I'm running drone.io on a kops-configured Kubernetes cluster. Any build steps that require network access (including the initial git clone) fail to resolve DNS names.
- Kubernetes cluster running v1.7.2, provisioned using kops
- Tried various networking options (flannel, canal, …)
- Tried Drone v6, v7, and the latest v8 RC, with the same results
From the build container:
Couldn't resolve host on clone appears to describe the same problem; however, I haven't been able to find a solution there.
Has anyone successfully run a recent version of Drone on Kubernetes? How do I configure the build agent so that it can access network resources?
Happy to provide any further information as required.
See Couldn't resolve host on clone for how we resolved the issue when drone-agent was using the host's Docker daemon for builds.
However, in the end we settled on running the drone-agent and a docker-in-docker (dind) container within the same pod.
The agent was configured to talk to the Docker daemon via TCP transport (tcp://127.0.0.1:2375).
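A minimal sketch of what such a pod could look like. This is an illustration, not the exact manifest we used: the image tags, the server address, and the shared secret are placeholders you would replace with your own values.

```yaml
# Sketch: drone-agent plus a docker:dind sidecar in one pod.
# The agent reaches the sidecar's Docker daemon over localhost TCP,
# so builds never touch the host's Docker networking.
apiVersion: v1
kind: Pod
metadata:
  name: drone-agent
spec:
  containers:
    - name: agent
      image: drone/agent:0.8            # placeholder tag
      env:
        - name: DRONE_SERVER
          value: ws://drone-server:8000/ws/broker   # placeholder address
        - name: DRONE_SECRET
          value: changeme                           # placeholder secret
        - name: DOCKER_HOST
          value: tcp://127.0.0.1:2375   # talk to the dind sidecar
    - name: dind
      image: docker:dind
      securityContext:
        privileged: true                # dind requires privileged mode
```

Because the dind daemon creates its own bridge networks inside the container, the host's `--iptables=false` configuration no longer affects build-container DNS.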
I recommend that you verify your host machine can properly configure DNS for user-defined networks. You should be able to test this with the commands below; substitute github.com with the hostname of your version control system.
```
docker network create foo
docker run --network=foo -t -i alpine ping -c 1 github.com
```
I am told that most people have resolved this with iptables rules. If you follow the link in zaa's previous comment you will see he modified his iptables configuration to resolve the issue.
We experienced the issue too while running Drone in a Kubernetes cluster created with the help of the kops tool. The cluster was using Calico for networking. It turned out that in this configuration Docker runs with the --iptables=false --ip-masq=false flags. In this mode Docker can't add a masquerading rule for a user-defined network to iptables, so packets from those networks are not NAT'ed. I added the masquerading iptables rule manually and this resolved the issue.
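For illustration, the manually added rule could look like the one below. The subnet and bridge interface name are placeholders, not the values from the original setup; find your network's actual subnet with `docker network inspect <name>` and the bridge name from `ip link`.

```
# Masquerade traffic leaving the user-defined network (placeholder values):
#   172.18.0.0/16  - the subnet Docker assigned to the user-defined network
#   br-xxxxxxxxxxxx - the bridge interface Docker created for that network
iptables -t nat -A POSTROUTING -s 172.18.0.0/16 ! -o br-xxxxxxxxxxxx -j MASQUERADE
```

This mirrors the rule Docker would normally add itself when it is not started with --iptables=false.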
I also agree with @zaa that a better approach would be to run the drone-agent and a docker:dind container in the same pod, instead of connecting Drone to the Docker daemon on your host machine.
This is also the approach taken by the Drone Helm chart.
Thanks for the prompt replies; they definitely pointed me in the right direction. Adding a dind container to my Helm chart for the agent deployment made all the difference.