I have drone running as a helm release (once I’ve figured out some of the finer points I’ll share my chart). I have a bunch of credentials stored as JSON keys on my local machine that need to be made available to one of my builds. The first thing I tried was creating a secret, mounting that secret on my drone-agent container, and then mounting that volume in my build. But for some reason my build just found an empty directory where my secret JSON files should have been.
It is entirely possible that I messed up my configuration somewhere along the way. I’m currently going through it with a fine-toothed comb.
My questions are:
- Am I on the right path?
- Does drone support secret volume mounts in some non-convoluted way? I can’t find any documentation on the topic.
Drone has its own secret management system, where you manage secrets in the user interface. You can read more about it here http://docs.drone.io/manage-secrets/
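For reference, a minimal sketch of that workflow (the secret name, image, and commands here are illustrative, not from the thread): a secret added through the UI or CLI is exposed to a pipeline step by listing it in the yaml, where it arrives as an environment variable:

```yaml
# .drone.yml sketch (drone 0.8-style syntax; names are hypothetical)
pipeline:
  build:
    image: golang
    # a secret stored as google_credentials would be exposed to
    # this step as the environment variable GOOGLE_CREDENTIALS
    secrets: [ google_credentials ]
    commands:
      - go build
```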
I tried was creating a secret, mounting that secret on my drone-agent container, and then mounting that volume in my build
Builds are run on the host machine, and are siblings of the agent container. They are not run inside the agent container, and will therefore not have access to files or folders mounted inside the agent container.
Does drone support secret volume mounts in some non-convoluted way? I cant find any documentation on the topic
Drone does not support secret volume mounts. Drone supports regular volume mounts.
If you have files on your host machine that you need to mount inside your pipeline containers, you can use the volumes section of the yaml configuration file, as demonstrated below:
```yaml
pipeline:
  build:
    image: golang
    volumes:
      # host path : container path (illustrative placeholder)
      - /path/on/host:/path/in/container
    commands:
      - go build
      - go test
```
Ok. So that means that if I want to make my secret files reliably available to drone on k8s as a volume, I need to copy those secrets onto every node in my cluster at the same location? Golly.
I guess what I’ll do is store them as normal secrets using drone’s UI, then cat them to files, and access those files in my build process.
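A minimal sketch of that workaround, assuming a secret named google_credentials holding the JSON key (the name, image, and paths are all illustrative):

```yaml
pipeline:
  build:
    image: golang
    secrets: [ google_credentials ]
    commands:
      # the secret arrives as an env var; write it back out as a file
      - echo "$GOOGLE_CREDENTIALS" > credentials.json
      - go build
```

If the JSON contains characters that are awkward to pass through the environment, a common variation is to base64-encode the value before storing it and decode it in the step.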
Thanks for the answer
@bradrydzewski is there any way to mount a volume to the build, so that caching uses that volume? Right now I’m mounting /cache onto the agent container, but upon trying to cache, it says /cache is a read-only file system. You said builds are not run in the agent container, so where would I need to mount the /cache volume?
You can mount host machine volumes directly into your build environment. Check out http://docs.drone.io/docker-volumes/
If you are using the volume cache, you can see an example in the plugin docs:
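For context, the drillster/drone-volume-cache plugin documents configuration along these lines (the mount targets and host path here are illustrative):

```yaml
pipeline:
  restore-cache:
    image: drillster/drone-volume-cache
    restore: true
    mount:
      - ./vendor
    # host directory backing the cache, mounted into the plugin
    volumes:
      - /tmp/cache:/cache

  build:
    image: golang
    commands:
      - go build
      - go test

  rebuild-cache:
    image: drillster/drone-volume-cache
    rebuild: true
    mount:
      - ./vendor
    volumes:
      - /tmp/cache:/cache
```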
Thanks, I am using those correctly. It works fine on its own. The problem is when I mount a volume to the agent container with Kubernetes: if I attempt to use that directory as the volume mount instead of /tmp, it gives a ‘cannot mkdir, read only’ error. I know for a fact the disk is writable. Not sure how to fix this. If I use the /tmp folder it gets full and breaks the build a few builds later. If you have any insight I’d appreciate it, and thanks for the quick responses, you’ve been so helpful!
Can I ask why you would mount a volume to the agent? The agent does not read or write anything to disk.
So I wasn’t sure exactly how it worked. I should be mounting a volume to the main drone instance then?
Can you provide a bit more context with regards to your use case? What is the reason for mounting a volume to the agent or server, and what problem are you hoping it will solve? Thanks!
I am using drone-volume-cache to cache my builds before and after. Works great, except it runs out of space. So I want to mount a 200GB volume to the host machine (which I thought was the agent) and then mount from that folder. But upon mounting the folder I get a read-only volume error when trying to mount and create the directory.
So I want to mount a 200GB volume to the host machine, which I thought was the agent
To clarify terminology, the host machine refers to the host operating system on which Docker is installed. Practically speaking, this is typically the physical or virtual server instance. If you create an AWS instance, the AWS instance is the host machine. The agent is therefore just a container running on the host machine, and is not the host machine itself.
When you set up the agent, you mount the host machine docker socket into the agent as a docker volume. The agent uses this socket to create containers on the host. This means your pipeline containers (spawned by the agent) are sibling containers of the agent. Or to put this another way, if you run `docker ps` on the host machine while a build is running, you will see your agent containers and your pipeline containers in the same list.
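A minimal sketch of that setup in docker-compose form (the image tag is an assumption; the socket path is the standard Docker default):

```yaml
# docker-compose.yml sketch: the agent needs the host's docker
# socket, not a data volume
version: '2'
services:
  drone-agent:
    image: drone/agent:0.8
    command: agent
    volumes:
      # this socket mount is what lets the agent spawn sibling
      # containers directly on the host
      - /var/run/docker.sock:/var/run/docker.sock
```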
For this reason, mounting a volume into an agent or server container is not the solution you are looking for.
So let’s say you want to attach an EBS volume to an AWS instance, for example, with a mount point of /data. If you want to write to this volume from inside your pipeline, you will mount /data into your container like this:

```yaml
volumes: [ /data:/cache ]
```
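Tying that back to the volume cache plugin from earlier in this thread, the cache step would then point at the host mount (a sketch; the mount list is illustrative):

```yaml
pipeline:
  restore-cache:
    image: drillster/drone-volume-cache
    restore: true
    mount:
      - ./vendor
    # the EBS volume mounted on the host at /data backs the cache
    volumes:
      - /data:/cache
```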
Thanks, that clears it all up for me!
Sorry, last question. I know this isn’t drone’s fault, but maybe you have more experience than I do. I can’t figure out how to mount a volume onto the host node itself (I’m using Kubernetes to host drone), so I can’t mount the cache drive. If there are any tutorials out there on caching with Google Cloud, or on using a mounted drive with Kubernetes, that would help a lot, but I couldn’t find anything. If you happen to know of anything like that I’d appreciate it.