[Artifact Servers] S3 - How to provide cross-account access via Bucket Policies (no Cross-Account Assume Role needed)

Howdy, gang! :rocket:


Harness has a lot of customers with very different Artifact Sources. One of the most common is using an S3 Bucket as the home for all kinds of artifacts.

And our Engineers are constantly working to enable S3 in many of the other integrations we have available. S3 and GCS are first-class citizens in our world, right?

In the AWS realm, I know that a lot of Solutions Architects will recommend a multi-account approach.
Sometimes, depending on the size of the business, four Accounts will be suggested for each Squad under a given AWS Organization:

  • A DEV Account;
  • A STG Account;
  • A PRD Account;
  • A DevTools and DevOps Account.

And if you are good at DevOps practices, you’ll have one single source of truth for your Artifacts. And that’s where cross-account access comes into play!

Let’s see how Harness can work around the current design of the S3 SDK/CLI. It’s super easy, to be honest.

Buckle up! :rocket:



We’ll work in a scenario with two Accounts:

  • Account A: will be the owner of the bucket. Let’s call it the DevOps/DevTools Account;
  • Account B: will be our PRD Account. The one that needs to access the cross-account S3 Bucket Objects.

First Step

Let’s define a good Artifact Source. I’ll store a couple of dummy Helm Charts (.tgz) in the S3 Bucket.
I could use GitHub Pages to start a free HTTP Server for my Charts, but that’s not the case today.

All files are for learning purposes only… they are dummies:
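As a quick sketch (bucket and chart names here are hypothetical), packaging a dummy chart and uploading the .tgz to the bucket in Account A looks like this:

    # Package a dummy chart; produces my-dummy-chart-0.1.0.tgz
    helm package ./my-dummy-chart

    # Upload it with server-side encryption, since the bucket policy
    # we'll define next denies unencrypted uploads
    aws s3 cp my-dummy-chart-0.1.0.tgz s3://<your_bucket>/charts/ --sse AES256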

Second Step

I guess it’s time to define a good S3 Bucket Policy, using the principle of least privilege.

It’s important to mention that you can make your Harness Delegate assume a Role in another Account.
Or even better, if you run your Delegate in EKS: you can use IRSA to map the Harness Delegate Pod’s Service Account to an IAM Role.
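For reference, a sketch of that IRSA mapping with eksctl (cluster, namespace, and policy names below are hypothetical placeholders, not from this walkthrough):

    # Create a Kubernetes Service Account bound to an IAM Role (IRSA)
    eksctl create iamserviceaccount \
      --cluster my-cluster \
      --namespace harness-delegate \
      --name harness-delegate-sa \
      --attach-policy-arn arn:aws:iam::<account_b_id>:policy/<s3-read-policy> \
      --approve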

But, for S3, there’s no need for overkill, right?

In our documentation, we suggest this Policy:

But currently, a more restrictive policy also works. Please keep in mind that least privilege is a continuous effort in an always-changing world.

I will use a condition to allow the Principals from Account B to reach that bucket, and grant all the actions that Harness currently needs to perform the task. Here:

    "Version": "2008-10-17",
    "Statement": [
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<your_bucket>/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
            "Effect": "Deny",
            "Principal": "*",
            "Action": [
            "Resource": "arn:aws:s3:::<your_bucket>"
            "Sid": "Allow list access from the org",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
            "Resource": [
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalOrgID": "<your_org>"

Third Step

Alright, let’s check it out!
I have created this Service, as an example, because it allows S3 as an Artifact Source:

Very Important: currently, neither Harness nor the AWS CLI will list cross-account buckets. But that does not mean the API cannot retrieve the objects. So, we can explicitly define the bucket in the Artifact Source Config Step.
Naturally, the Cloud Provider I’m picking is from Account B (matching the policy’s Org Condition), not from the S3 Bucket Owner Account:
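You can see the same behavior from the CLI. From Account B (the profile name here is hypothetical), listing all buckets won’t show the cross-account bucket, but object-level calls against the known bucket name still work:

    # Bucket-listing is account-scoped: the cross-account bucket won't appear
    aws s3 ls --profile account-b

    # ...but naming the bucket explicitly works, thanks to the bucket policy
    aws s3api list-objects-v2 --bucket <your_bucket> --profile account-b
    aws s3 cp s3://<your_bucket>/charts/my-dummy-chart-0.1.0.tgz . --profile account-b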


Last Step

Now let’s wait for the Harness async Task to grab our beloved Artifacts.

import time


Just kidding :grin:

Here it goes:


Now you’re not afraid of Cross-Account S3 strategies anymore, I hope.

Further reading:



<cloud: aws, azure, tanzu, gcp, on-prem, kubernetes, github, docker, container registry, s3>
<function: ci,cd>
<role: swdev,devops,secops,itexec>
<type: howto, experts>
<category: triggers, gitops, templates, s3>