@Kash there are two different scenarios that Drone users need to account for when it comes to the docker plugin and rate limiting.
Scenario 1: Build and Publish to Dockerhub
If you are building and publishing to Dockerhub, you are already providing a username and password to the plugin. The plugin always executes a docker login before it builds and publishes the image. This means that image pulls are going to be authenticated and will receive increased rate limits. If the username and password are associated with a paid Dockerhub account, there will be no pull limit.
The recommended solution to rate limiting in this scenario is to ensure the username and password provided to the plugin are associated with a paid account. You could even use the organization secrets feature to provide the credentials at the organization level, although this may require changing yaml files or existing secrets.
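For reference, here is a minimal sketch of a scenario 1 step. The repository and secret names (foo/bar, dockerhub_username, dockerhub_password) are placeholders, not values from this thread; the same from_secret references also work with organization secrets.

- name: publish
  image: plugins/docker
  settings:
    # pushing to Dockerhub, so the plugin runs docker login with these
    # credentials first and the build's image pulls are authenticated too
    repo: foo/bar
    username:
      from_secret: dockerhub_username
    password:
      from_secret: dockerhub_password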
Scenario 2: Build and Publish to other Registry, but Dockerfile references Dockerhub Images
If you are supplying a username and password to the docker plugin, but the username and password are for another registry (for example, quay.io), and your Dockerfile pulls images from Dockerhub, those pulls will not be authenticated and will be subject to rate limits.
The recommended solution in this scenario is to provide Dockerhub credentials using the config parameter. Note that these credentials should be associated with a paid account to avoid rate limiting. The config parameter should contain the contents of a Docker config.json file:
- name: build
  image: plugins/docker
  settings:
    repo: quay.io/foo/bar
    username:
    password:
    config: |
      {
        "auths": {
          "https://index.docker.io/v1/": {
            "auth": "c3R...zE2"
          }
        }
      }
The config file can also be sourced from a secret:
config:
  from_secret: ...
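If you store the config in a secret, the drone CLI can load the file contents for you. A rough example, assuming a secret named dockerconfigjson and a config file at the default path (the repository name and path here are placeholders, so adjust to your setup):

drone secret add \
  --repository foo/bar \
  --name dockerconfigjson \
  --data @$HOME/.docker/config.json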
Global Solution: Using a Global Mirror
Finally, a global solution to this problem is to set up a registry mirror. We have some customers that set up a mirror to proxy and authenticate requests to Dockerhub. I am not sure how to configure registry mirrors with kubernetes, but with docker you can configure mirrors by editing the docker daemon.json file.
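For example, a daemon.json along these lines (the mirror URL is a placeholder for your own mirror) points the Docker daemon at the mirror before it falls back to Dockerhub; the daemon needs a restart to pick up the change:

{
  "registry-mirrors": ["https://docker.company.com"]
}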
The docker plugin also provides an option to configure a mirror in the yaml. You could globally configure this mirror for all pipelines by setting the following global environment variable on your runner. This would be the equivalent of setting the mirror in every yaml.
PLUGIN_MIRROR=https://docker.company.com
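For comparison, the per-pipeline equivalent is the mirror setting on the plugin step (placeholder repository and URL again):

- name: build
  image: plugins/docker
  settings:
    repo: foo/bar
    mirror: https://docker.company.com

With the docker runner, the global variable can usually be injected through the runner's DRONE_RUNNER_ENVIRON setting (for example DRONE_RUNNER_ENVIRON=PLUGIN_MIRROR:https://docker.company.com), but check your runner's documentation for the exact format.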