Configure GCR/Artifact Registry to prevent egress charges


This document explains how to avoid network egress charges when sharing Docker images, by creating a service perimeter around the Cloud project that contains your Container Registry or Artifact Registry.

Source material for this article was contributed by Matt Bookman and the Verily Life Sciences solutions team as part of the design and engineering rollout of Terra support for data regionality.

Overview

Owning a public Container Registry or Artifact Registry is a useful way to share Docker images broadly. However, copying an image out of GCP, or to a GCP region other than the one where the image is stored, can incur significant network egress charges. These charges are paid by you (the image owner), not by the end user.

Fortunately, you can avoid these charges by using Google Cloud's VPC Service Controls to create a service perimeter around the Cloud project that contains your Container Registry or Artifact Registry. The instructions below walk through this configuration.

Example

The following example demonstrates configuration for both Container Registry and Artifact Registry. Note that Container Registry images are stored in and served from Google Cloud Storage, so Container Registry is secured by restricting the Cloud Storage APIs, while Artifact Registry is secured by restricting the Artifact Registry APIs.

For concreteness, this example is for images stored in the us-central1 region. VPC Service Controls are added to prevent egress outside of this region.

Example Overview

In this example, we have an organization named testorg.net containing a project named test-project. Putting a project into a service perimeter lets you restrict the use of Google Cloud services such as Cloud Storage, which prevents data in Cloud Storage from leaving the perimeter. We also apply an Access Level, which grants specific access to services inside the perimeter. The Access Level created in this example allows ingress from specific IP ranges. We do not specify any egress rules, so only VMs admitted through the Access Level can download the Cloud Storage data.

Container Registry Cloud Resources

In test-project, there is a container registry named us.gcr.io/test-project, to which we have pushed an image named my-image. There is also a multi-regional US bucket named us.artifacts.test-project.appspot.com that stores the image data.

Our goal in this example is to create a perimeter that restricts access to us.artifacts.test-project.appspot.com to VMs in us-central1 only. Note that a more liberal configuration would also allow VMs in other US regions.
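
If you are unsure of the backing bucket's name, you can list the buckets in the project; this is a sketch, assuming gsutil is installed and you have access to test-project (output trimmed to the relevant bucket):

$ gsutil ls -p test-project
gs://us.artifacts.test-project.appspot.com/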

Artifact Registry Cloud Resources

In test-project, there is a registry named us-central1-docker.pkg.dev/test-project/docker-us-central1, to which we have pushed an image named my-image.

Our goal in this example is to create a perimeter that restricts access to us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image to VMs in us-central1 only.
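
To confirm the repository and its location, you can list the Artifact Registry repositories in the project; a sketch, assuming access to test-project:

$ gcloud artifacts repositories list \
    --project=test-project \
    --location=us-central1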

VPC Configuration

Before you begin

Creating the Access Level and Perimeter requires an access policy to exist for your organization. An organization can have at most one access policy; attempting to create a second one returns an error. If your organization doesn't have one yet, create it as sketched below.
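
If your organization has no access policy yet, a command along these lines creates one (a sketch, assuming ORGANIZATION_ID is exported as described below; the title is an arbitrary placeholder):

$ gcloud access-context-manager policies create \
    --organization=${ORGANIZATION_ID} \
    --title="Default policy"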

We also recommend creating the following environment variables for the configuration process:

$ export PROJECT_NUMBER=<The project number>
$ export PROJECT_ID=<The project ID>
$ export ORGANIZATION_ID=<The organization ID>
$ export POLICY_ID=<The project access policy ID>
$ export PROJECT_ADMIN_EMAIL=<Project administrator email>

# You can retrieve your ORGANIZATION_ID with this command:

$ curl -X POST \
    -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    https://cloudresourcemanager.googleapis.com/v1/projects/${PROJECT_NUMBER}:getAncestry

# This will return:
#{
#  "ancestor": [
#    {
#      "resourceId": {
#        "type": "project",
#        "id": <PROJECT_ID>
#      }
#    },
#    {
#      "resourceId": {
#        "type": "organization",
#        "id": <ORGANIZATION_ID>
#      }
#    }
#  ]
#}
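
# Alternatively, gcloud reports the same ancestry directly; the
# organization ID appears in the row whose TYPE is "organization":
$ gcloud projects get-ancestors ${PROJECT_ID}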

# You can retrieve your POLICY_ID with this command:
$ gcloud access-context-manager policies list \
--organization=${ORGANIZATION_ID}

# This will return:
# NAME          ORGANIZATION     TITLE           ETAG
# <POLICY_ID>  <ORGANIZATION_ID> <POLICY_TITLE>  <POLICY_ETAG>
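
# A sketch for capturing the policy ID in an environment variable; the
# name field may come back as accessPolicies/<id>, so strip the prefix:
$ export POLICY_ID=$(gcloud access-context-manager policies list \
    --organization=${ORGANIZATION_ID} \
    --format="value(name)" | sed 's|^accessPolicies/||')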

Create Access Level

First, we create an Access Level to allow access from the IP ranges of VMs in us-central1. The IP ranges are publicly available from https://www.gstatic.com/ipranges/cloud.json.

Note that restricting access to only these IP ranges will block use of the Cloud Console UI to view the bucket. To keep using the Cloud Console UI, we also give our individual account access.

First, create a file named us_central1.yaml containing your administrator email and the us-central1 IP ranges. The file is read verbatim by gcloud, so substitute the literal email address for ${PROJECT_ADMIN_EMAIL}:

$ head us_central1.yaml
- members:
  - user:${PROJECT_ADMIN_EMAIL}
- ipSubnetworks:
  - 8.34.210.0/24
  - 8.34.212.0/22
  - 8.34.216.0/22
  - 8.35.192.0/21
  <snip>

You can get the full list of us-central1 IPv4 ranges with something like this (the // empty filter skips entries that carry only an IPv6 prefix):

$ curl -s https://www.gstatic.com/ipranges/cloud.json | \
    jq -r '.prefixes[] | select(.scope == "us-central1") | .ipv4Prefix // empty'

Or, if you prefer Python to jq:

$ curl -s https://www.gstatic.com/ipranges/cloud.json | \
    python3 -c '
import sys, json

# Print only the IPv4 prefixes scoped to us-central1; some entries
# carry only an IPv6 prefix and are skipped.
prefixes = json.load(sys.stdin)["prefixes"]
for p in prefixes:
    if p["scope"] == "us-central1" and "ipv4Prefix" in p:
        print(p["ipv4Prefix"])
'

Finally, use gcloud to create the access level:

$ gcloud access-context-manager levels create us_central1_only \
    --title=us_central1_only \
    --basic-level-spec=us_central1.yaml \
    --policy=${POLICY_ID} \
    --combine-function="or"
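
To confirm the level was created as intended, you can describe it:

$ gcloud access-context-manager levels describe us_central1_only \
    --policy=${POLICY_ID}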

Create Perimeter

Next, we need to create a perimeter that uses the above access level. This perimeter will be placed around test-project and enforced on the Google Cloud Storage service.

$ gcloud access-context-manager perimeters create new_perimeter \
  --title=new_perimeter \
  --resources=projects/${PROJECT_NUMBER} \
  --access-levels=us_central1_only \
  --restricted-services=storage.googleapis.com \
  --policy=${POLICY_ID}

If you are using Artifact Registry, you would instead add enforcement on the Artifact Registry service.

$ gcloud access-context-manager perimeters create new_perimeter \
  --title=new_perimeter \
  --resources=projects/${PROJECT_NUMBER} \
  --access-levels=us_central1_only \
  --restricted-services=artifactregistry.googleapis.com \
  --policy=${POLICY_ID}

If you are using both, you can specify them together:

$ gcloud access-context-manager perimeters create new_perimeter \
  --title=new_perimeter \
  --resources=projects/${PROJECT_NUMBER} \
  --access-levels=us_central1_only \
  --restricted-services=storage.googleapis.com,artifactregistry.googleapis.com \
  --policy=${POLICY_ID}
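
Whichever variant you choose, you can verify the resulting configuration by describing the perimeter:

$ gcloud access-context-manager perimeters describe new_perimeter \
    --policy=${POLICY_ID}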

Tests

From a us-central1 VM (success)

$ curl -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/zone
projects/<project-number>/zones/us-central1-a

$ docker pull us.gcr.io/test-project/my-image
Using default tag: latest
latest: Pulling from test-project/my-image
29291e31a76a: Pull complete
Digest: sha256:be9bdc0ef8e96dbc428dc189b31e2e3b05523d96d12ed627c37aa2936653258c
Status: Downloaded newer image for us.gcr.io/test-project/my-image:latest
us.gcr.io/test-project/my-image:latest

$ docker pull us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image:latest
latest: Pulling from test-project/docker-us-central1/my-image
f8416d8bac72: Pull complete
3d1fe1074eae: Pull complete
01ee43ff2a96: Pull complete
83c2515dd8ac: Pull complete
84e681791894: Pull complete
Digest: sha256:50e21e0bac13e1dfa37626d1c05433cc29e0f1d15fa390e2ecbae32221c6646d
Status: Downloaded newer image for us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image:latest
us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image:latest

From a European VM (fail)

$ curl -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/zone
projects/<project-number>/zones/europe-west2-c

$ docker pull us.gcr.io/test-project/my-image
Using default tag: latest
Error response from daemon: unauthorized: You don't have the needed permissions to perform
this operation, and you may have invalid credentials. To authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication

$ docker pull us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image
Using default tag: latest
Error response from daemon: unauthorized: You don't have the needed permissions to perform
this operation, and you may have invalid credentials. To authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication

From a workstation when NOT logged in as PROJECT_ADMIN_EMAIL (fail)

$ docker pull us.gcr.io/test-project/my-image
Using default tag: latest
Error response from daemon: unauthorized: You don't have the needed permissions to perform
this operation, and you may have invalid credentials. To authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication

Notes/Caveats

  • VPC Service perimeters are only available to projects with a Cloud Organization. See the documentation for Creating and managing organizations.
  • Management of VPC service perimeters requires organization-level permissions. If you do not have permissions at this level, consult with your organization's IT administrators to set up VPC service perimeters around a dedicated data sharing project, and work with them to configure it.
  • Putting a project in the service perimeter as described above places all Cloud Storage buckets or Artifact Registry registries in the project inside the perimeter. You will therefore probably want a dedicated project (without other Cloud services enabled) for buckets and registries that share the same location and restrictions.
  • The above configuration restricts direct bucket-to-bucket copies, even within the same region. To copy an image from one registry to another in the above example, pull the image to a VM in us-central1 and then push it to any target registry to which you have access (see the sketch after this list).
  • We've considered whether the Storage Transfer Service API should also be restricted. We believe the answer is no, because the Storage Transfer Service ultimately calls the Cloud Storage APIs, which will be checked appropriately.
  • A companion article, Configure GCS to prevent egress charges, covers the same approach for direct Cloud Storage access.
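
A minimal sketch of the pull-and-push copy mentioned above; <target-registry> is a placeholder for any registry you can write to:

$ docker pull us.gcr.io/test-project/my-image:latest
$ docker tag us.gcr.io/test-project/my-image:latest <target-registry>/my-image:latest
$ docker push <target-registry>/my-image:latest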
