Kubernetes panic: runtime error: index out of range

I sometimes get this error in Kubernetes, and it is not consistent:

$ kubectl get pod drone-job-7cmxg-wnbrb
NAME                    READY   STATUS   RESTARTS   AGE
drone-job-7cmxg-wnbrb   0/1     Error    0          34m
$ kubectl describe pod drone-job-7cmxg-wnbrb
Name:               drone-job-7cmxg-wnbrb
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-drone-v1-default-pool-4909a299-gnp9/10.156.0.2
Start Time:         Thu, 20 Dec 2018 13:17:01 +0100
Labels:             controller-uid=27c7e027-0451-11e9-a87a-42010a9c0fd1
                    job-name=drone-job-7cmxg
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container drone-controller
Status:             Failed
IP:                 10.24.0.127
Containers:
  drone-controller:
    Container ID:   docker://50f9f35b0e940b1a89d059957c7c0b2a59fb85ac3d9dde41d1a83f464b59c567
    Image:          drone/controller:linux-amd64
    Image ID:       docker-pullable://drone/[email protected]:3dca4ed11977984ed0a8770232519d76c53fce2c6f9a687b1b9ffa28d6c82c73
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 20 Dec 2018 13:17:02 +0100
      Finished:     Thu, 20 Dec 2018 13:18:42 +0100
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      DRONE_STAGE_ID:     65
      DRONE_LOGS_DEBUG:   false
      DRONE_LOGS_TRACE:   false
      DRONE_LOGS_COLOR:   false
      DRONE_LOGS_PRETTY:  false
      DRONE_LOGS_TEXT:    false
      DRONE_RPC_PROTO:    https
      DRONE_RPC_HOST:     drone.kevinsimper-test.com
      DRONE_RPC_SERVER:
      DRONE_RPC_SECRET:   b0ec4962c7b4924c34f04512131659b0
      DRONE_RPC_DEBUG:    false
      KUBERNETES_NODE:     (v1:spec.nodeName)
      DRONE_RUNNER_NAME:   (v1:spec.nodeName)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-555nk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-555nk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-555nk
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                                              Message
  ----    ------     ----  ----                                              -------
  Normal  Scheduled  34m   default-scheduler                                 Successfully assigned default/drone-job-7cmxg-wnbrb to gke-drone-v1-default-pool-4909a299-gnp9
  Normal  Pulled     34m   kubelet, gke-drone-v1-default-pool-4909a299-gnp9  Container image "drone/controller:linux-amd64" already present on machine
  Normal  Created    34m   kubelet, gke-drone-v1-default-pool-4909a299-gnp9  Created container
  Normal  Started    34m   kubelet, gke-drone-v1-default-pool-4909a299-gnp9  Started container
$ kubectl logs -f drone-job-7cmxg-wnbrb
{"arch":"amd64","build":34,"level":"info","machine":"gke-drone-v1-default-pool-4909a299-gnp9","msg":"runner: start execution","os":"linux","pipeline":"default","repo":"kevinsimper/cloud-native-report","stage":1,"time":"2018-12-20T12:17:03Z"}
panic: runtime error: index out of range

goroutine 140 [running]:
github.com/drone/drone/vendor/github.com/drone/drone-runtime/engine/kube.(*kubeEngine).Wait(0xc42000a420, 0x13997c0, 0xc420044018, 0xc42063ce00, 0xc4204c43c0, 0x1382a80, 0xc420192080, 0x0)
	/go/src/github.com/drone/drone/vendor/github.com/drone/drone-runtime/engine/kube/kube.go:194 +0x353
github.com/drone/drone/vendor/github.com/drone/drone-runtime/runtime.(*Runtime).exec(0xc42068ef80, 0xc4204c43c0, 0x0, 0x0)
	/go/src/github.com/drone/drone/vendor/github.com/drone/drone-runtime/runtime/runtime.go:214 +0x8e6
github.com/drone/drone/vendor/github.com/drone/drone-runtime/runtime.(*Runtime).execAll.func1(0xc400000008, 0x1300408)
	/go/src/github.com/drone/drone/vendor/github.com/drone/drone-runtime/runtime/runtime.go:130 +0x33
github.com/drone/drone/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1(0xc420556ac0, 0xc4206235a0)
	/go/src/github.com/drone/drone/vendor/golang.org/x/sync/errgroup/errgroup.go:58 +0x57
created by github.com/drone/drone/vendor/golang.org/x/sync/errgroup.(*Group).Go
	/go/src/github.com/drone/drone/vendor/golang.org/x/sync/errgroup/errgroup.go:55 +0x66

Can you provide steps to reproduce? For example, would a particular YAML file allow me to reproduce the issue?

The problem is that it is random. I would if I could, and the YAML file just looks like this:

kind: pipeline
name: default

workspace:
  base: /tmp
  path: "."

steps:
  - name: docker
    image: plugins/docker
    settings:
      repo: kevinsimper/kevinsimper
      dry_run: true

trigger:
  event:
    - push

Interesting. Based on the stack trace, it looks like an issue with the line below. I wonder why the Kubernetes API would return an empty container status … any ideas?
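For reference, the panic at kube.go:194 is an unguarded index into the pod's container statuses. A defensive version would check the slice length before indexing. This is only a minimal sketch with a stand-in struct (not the actual drone-runtime code or the real Kubernetes `ContainerStatus` type), to show the shape of the fix:

```go
package main

import "fmt"

// containerStatus is a stand-in for the Kubernetes ContainerStatus type;
// the real code reads pod.Status.ContainerStatuses from the API server.
type containerStatus struct {
	Name     string
	ExitCode int
}

// exitCode returns the exit code of the first container status, or an
// error when the API returned an empty slice -- e.g. after the pod was
// evicted -- instead of panicking with "index out of range".
func exitCode(statuses []containerStatus) (int, error) {
	if len(statuses) == 0 {
		return 0, fmt.Errorf("no container statuses reported (pod may have been evicted)")
	}
	return statuses[0].ExitCode, nil
}

func main() {
	// Normal case: one terminated container with exit code 2.
	code, err := exitCode([]containerStatus{{Name: "drone-controller", ExitCode: 2}})
	fmt.Println(code, err)

	// Evicted pod: the API returns no statuses; the guarded code reports
	// an error instead of crashing the whole runner goroutine.
	_, err = exitCode(nil)
	fmt.Println(err)
}
```

The key point is that `ContainerStatuses` is a slice that can legitimately be empty, so any `statuses[0]` access needs a length guard.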

No, not really. The image it runs is drone/controller:linux-amd64 and I didn't change anything, so it is pretty much bare Kubernetes on GKE.

I’m running into this as well:

panic: runtime error: index out of range

goroutine 347 [running]:
github.com/drone/drone-runtime/engine/kube.(*kubeEngine).Wait(0xc00000ae60, 0x142c3e0, 0xc000044018, 0xc00081db20, 0xc0000181e0, 0x1415760, 0xc00020a8c0, 0x0)
	/go/pkg/mod/github.com/drone/[email protected]/engine/kube/kube.go:198 +0x358
github.com/drone/drone-runtime/runtime.(*Runtime).exec(0xc000488200, 0xc0000181e0, 0x0, 0x0)
	/go/pkg/mod/github.com/drone/[email protected]91445-ad403a0ca24e/runtime/runtime.go:228 +0x8a5
github.com/drone/drone-runtime/runtime.(*Runtime).execAll.func1(0xc000000008, 0x1324498)
	/go/pkg/mod/github.com/drone/[email protected]/runtime/runtime.go:144 +0x33
golang.org/x/sync/errgroup.(*Group).Go.func1(0xc00073bd40, 0xc000234980)
	/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:58 +0x57
created by golang.org/x/sync/errgroup.(*Group).Go
	/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:55 +0x66

I notice the node is reporting an EvictionThresholdMet event. Could this be happening because the nodes are running into memory pressure from the builds?

So the build pods running in that namespace get evicted, and then Drone hits an exception because of the eviction.

Just confirming: the source of this error was Kubernetes evicting pods due to memory pressure on the worker node. I added a bigger node and this issue has been fixed.

@kevinsimper this is potentially what you are running into as well, since the issue is happening at random.