DRONE_TLS_AUTOCERT running v1.x.x errors with 'missing server name'

I installed a new instance of Drone v1.2.1 on my Kubernetes cluster. I followed the installation guide (which appears to be written for 1.0.0, but I'm hoping it's still good). I have the server working over plain HTTP, but when I tried the autocert option to configure my SSL certs it failed. My cluster is hosted on GKE, and I always configure a readiness probe, which translates to a backend health check via the GKE Ingress API. When my readiness probe runs I see this error:

2019/07/21 06:46:25 http: TLS handshake error from 10.44.0.1:38584: acme/autocert: missing server name

When I attempt to access any path on the server, I see this in the logs:

2019/07/21 06:46:29 http: TLS handshake error from 127.0.0.1:52150: acme/autocert: server name component count invalid

I have tried a number of things but haven't gotten the certs to auto-configure yet. When I inspect the running container I see some existing certs at /etc/ssl/cert.pem, which appear to be issued by a CA in Spain, with CN=ACCVRAIZ1 and SAN=email:[email protected]

I do not see any certificates for my DRONE_SERVER_HOST, however. In other situations I have configured certificates for Drone via cert-manager and could probably do so again, but I'd like to figure out where I'm going wrong with the autocert module. I definitely have DRONE_SERVER_HOST set to my domain.

Any suggestions or help is appreciated.

Yes, the docs are valid for 1.x.

The x/crypto/acme/autocert package is configured to write certificates to /data/golang-autocert. Make sure you mount /data as a read-write volume:

$ ls /data/golang-autocert/
acme_account+key        cloud.drone.io
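For reference, a minimal sketch of a read-write /data mount on Kubernetes (the container name, image tag, and the pre-existing PVC name are assumptions, not taken from any official chart):

```yaml
# Sketch: give the Drone server container a writable /data volume so
# autocert can persist certificates in /data/golang-autocert.
containers:
  - name: drone-server
    image: drone/drone:1.2.1
    volumeMounts:
      - name: drone-data
        mountPath: /data
volumes:
  - name: drone-data
    persistentVolumeClaim:
      claimName: drone-data   # assumed pre-existing PersistentVolumeClaim
```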

The certificates in /etc/ssl come from the Alpine SSL certificate package, which is installed with apk add ca-certificates. I cannot speak to the CA or the email address; if you have specific questions, I recommend reaching out to the package maintainer.

Maybe the default Kubernetes environment variables are causing some sort of conflict? We have seen cases before where the Kubernetes {SERVICE}_HOST and {SERVICE}_PROTO variables cause problems, depending on how things are named: https://kubernetes.io/docs/concepts/containers/container-environment-variables/#cluster-information
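If the injected service variables do turn out to collide with your DRONE_* settings, one way to rule them out is to disable service links on the pod spec (a sketch, assuming Kubernetes 1.13+, which added the enableServiceLinks field; the names and domain below are placeholders):

```yaml
# Sketch: enableServiceLinks: false stops Kubernetes from injecting
# {SERVICE}_* environment variables for every Service in the namespace,
# which avoids accidental collisions with DRONE_* configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-server
  template:
    metadata:
      labels:
        app: drone-server
    spec:
      enableServiceLinks: false
      containers:
        - name: drone-server
          image: drone/drone:1.2.1
          env:
            - name: DRONE_SERVER_HOST
              value: drone.example.com   # placeholder domain
            - name: DRONE_TLS_AUTOCERT
              value: "true"
```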

You can rule out configuration issues by looking for the following entry in your logs and checking the host, address, port, and protocol:

{
  "acme": true,
  "host": "cloud.drone.io",
  "level": "info",
  "msg": "starting the http server",
  "port": ":443",
  "proto": "https",
  "time": "2019-07-15T03:02:22Z",
  "url": "https://cloud.drone.io"
}

I'm running on GKE and had some networking issues that were causing the validation to fail, but it looks good now. I'm using an L4 load balancer (a Kubernetes Service with type LoadBalancer) to handle requests to the dashboard, and I'm using this annotation to differentiate between HTTP and HTTPS:

  annotations:
    cloud.google.com/app-protocols: '{"<tls-port-name>":"HTTPS","<non-tls-port-name>":"HTTP"}'
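In case it helps anyone with a similar setup, the annotation above goes on the Service object; a hedged sketch of the full manifest (the port names "http" and "https" are placeholders I chose, and they must match the keys in the annotation):

```yaml
# Sketch of an L4 LoadBalancer Service for the Drone dashboard,
# following the setup described above. Port names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: drone-server
  annotations:
    cloud.google.com/app-protocols: '{"https":"HTTPS","http":"HTTP"}'
spec:
  type: LoadBalancer
  selector:
    app: drone-server
  ports:
    - name: http      # matches the "http" key in the annotation
      port: 80
      targetPort: 80
    - name: https     # matches the "https" key in the annotation
      port: 443
      targetPort: 443
```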

Thanks for your help