How to configure Google Artifact Registry to prevent data transfer (egress) charges

Allie Hajian

This document explains how users who share Docker images can avoid data transfer (formerly "egress") charges by creating a service perimeter around the cloud project that contains their Artifact Registry.

Source material for this article was contributed by Willy Nojopranoto and the Verily Life Sciences solutions team as part of the design and engineering rollout of Terra support for data regionality.

Overview: Preventing data transfer charges

Owning a public Artifact Registry (the successor to Google Container Registry) repository is a useful way to share Docker images broadly. However, copying an image out of Google Cloud, or into a Google Cloud region other than the one where the image is stored, can incur significant network data transfer charges. These charges are paid by you (the image owner), not by the end user.

Fortunately, it is possible to avoid network data transfer charges through the use of Google Cloud's VPC Service Controls. This document provides instructions on how to create a service perimeter around the cloud project that contains your Artifact Registry.

Example Artifact Registry configuration

The following example demonstrates the configuration for an Artifact Registry repository with images stored in the us-central1 region. VPC (Virtual Private Cloud) Service Controls are added to prevent data transfer out of this region.

Example Overview

In this example, we have an organization named testorg.net that contains a project named test-project. When you put a project into a service perimeter, you can restrict the usage of Google Cloud services such as Artifact Registry and Cloud Storage, which prevents data managed by those services from leaving the perimeter. However, we also apply an Access Level, which grants specific access to services inside the perimeter. The Access Level created in this example allows ingress of requests from specific IP ranges. We do not specify any data transfer rules, so only virtual machines (VMs) admitted through the access level can download the data.

Example Cloud Resources

  • In test-project, there is a registry named us-central1-docker.pkg.dev/test-project/docker-us-central1.
  • In this registry, we have pushed an image named my-image.

Our goal in this example is to create a perimeter that restricts access to us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image to VMs in us-central1.
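
Before adding any controls, you can confirm the image is in place by listing the repository's images (a quick sanity check, assuming the project and repository above already exist):

$ gcloud artifacts docker images list \
    us-central1-docker.pkg.dev/test-project/docker-us-central1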

VPC Configuration

Before you begin

Creating the Access Level and Perimeter requires an access policy for your organization. If it doesn't exist yet, create an access policy for your organization. Organizations can only have one access policy. If you attempt to create an access policy and one already exists for your organization, you will receive an error.
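
If your organization does not yet have an access policy, one way to create it is with gcloud (using the ORGANIZATION_ID environment variable described below; the title is illustrative):

$ gcloud access-context-manager policies create \
    --organization=${ORGANIZATION_ID} \
    --title="Example access policy"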

We also recommend creating the following environment variables for the configuration process.

$ export PROJECT_NUMBER=<The project number>
$ export PROJECT_ID=<The project ID>
$ export ORGANIZATION_ID=<The organization ID>
$ export POLICY_ID=<The organization's access policy ID>
$ export PROJECT_ADMIN_EMAIL=<Project administrator email>

# You can retrieve your ORGANIZATION_ID with this command:
$ curl -X POST \
    -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    https://cloudresourcemanager.googleapis.com/v1/projects/${PROJECT_NUMBER}:getAncestry

# This will return:
# {
#   "ancestor": [
#     {
#       "resourceId": {
#         "type": "project",
#         "id": <PROJECT_ID>
#       }
#     },
#     {
#       "resourceId": {
#         "type": "organization",
#         "id": <ORGANIZATION_ID>
#       }
#     }
#   ]
# }

# You can retrieve your POLICY_ID with this command:
$ gcloud access-context-manager policies list \
    --organization=${ORGANIZATION_ID}

# This will return:
# NAME         ORGANIZATION       TITLE           ETAG
# <POLICY_ID>  <ORGANIZATION_ID>  <POLICY_TITLE>  <POLICY_ETAG>

Step 1: Create Access Level

First, we create an Access Level to allow access from the IP ranges of VMs in us-central1. The IP ranges are publicly available from https://www.gstatic.com/ipranges/cloud.json.

Note: Restricting access to only these IP ranges blocks the use of the Cloud Console to view the registry. To continue using the Cloud Console, we also grant our individual account access (the members entry in the YAML below).

Start by creating a file named us_central1.yaml that contains:

$ head us_central1.yaml
- members:
  - user:${PROJECT_ADMIN_EMAIL}
- ipSubnetworks:
  - 8.34.210.0/24
  - 8.34.212.0/22
  - 8.34.216.0/22
  - 8.35.192.0/21
  <snip>

You can get the full list of us-central1 IP ranges with something like:

$ curl -s https://www.gstatic.com/ipranges/cloud.json | \
    jq -r '.prefixes[] | select(.scope == "us-central1") | .ipv4Prefix // empty'

Or if you prefer to use Python instead of jq:

$ curl -s https://www.gstatic.com/ipranges/cloud.json | \
    python3 -c '
import sys, json

# Print the IPv4 prefixes advertised for us-central1, skipping IPv6-only entries.
for p in json.load(sys.stdin)["prefixes"]:
    if p.get("scope") == "us-central1" and "ipv4Prefix" in p:
        print(p["ipv4Prefix"])
'
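
If you'd rather generate the whole file in one step, here is one possible approach (a sketch that assumes the PROJECT_ADMIN_EMAIL variable from above is exported and that you want all us-central1 IPv4 ranges):

$ cat > us_central1.yaml <<EOF
- members:
  - user:${PROJECT_ADMIN_EMAIL}
- ipSubnetworks:
$(curl -s https://www.gstatic.com/ipranges/cloud.json | \
    jq -r '.prefixes[] | select(.scope == "us-central1") | .ipv4Prefix // empty' | \
    sed 's/^/  - /')
EOF

Note that this expands the variables and the IP list at file-creation time, so the resulting YAML contains literal values.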

Finally, use gcloud to create the access level:

$ gcloud access-context-manager levels create us_central1_only \
--title=us_central1_only \
--basic-level-spec=us_central1.yaml \
--policy=${POLICY_ID} \
--combine-function="or"
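
To verify the access level was created as expected, you can describe it:

$ gcloud access-context-manager levels describe us_central1_only \
    --policy=${POLICY_ID}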

Step 2: Create Perimeter

Next, we need to create a perimeter that uses the access level above. This perimeter is placed around test-project and enforced on the Artifact Registry service.

$ gcloud access-context-manager perimeters create new_perimeter \
  --title=new_perimeter \
  --resources=projects/${PROJECT_NUMBER} \
  --access-levels=us_central1_only \
  --restricted-services=artifactregistry.googleapis.com \
  --policy=${POLICY_ID}
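
You can confirm the perimeter's configuration (project, access level, and restricted service) by describing it:

$ gcloud access-context-manager perimeters describe new_perimeter \
    --policy=${POLICY_ID}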

Test examples

From a us-central1 VM (success)

$ curl http://metadata.google.internal/computeMetadata/v1/instance/zone \
    -H "Metadata-Flavor: Google"
projects/<project-number>/zones/us-central1-a

$ docker pull us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image:latest
latest: Pulling from test-project/docker-us-central1/my-image
f8416d8bac72: Pull complete
3d1fe1074eae: Pull complete
01ee43ff2a96: Pull complete
83c2515dd8ac: Pull complete
84e681791894: Pull complete
Digest: sha256:50e21e0bac13e1dfa37626d1c05433cc29e0f1d15fa390e2ecbae32221c6646d
Status: Downloaded newer image for us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image:latest
us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image:latest

From a European VM (fail)

$ curl http://metadata.google.internal/computeMetadata/v1/instance/zone \
    -H "Metadata-Flavor: Google"
projects/<project-number>/zones/europe-west2-c

$ docker pull us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image
Using default tag: latest
Error response from daemon: unauthorized: You don't have the needed permissions to perform
this operation, and you may have invalid credentials. To authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication

From a workstation when NOT logged in as PROJECT_ADMIN_EMAIL (fail)

$ docker pull us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image
Using default tag: latest
Error response from daemon: unauthorized: You don't have the needed permissions to perform
this operation, and you may have invalid credentials. To authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication

Notes/Caveats

  • VPC Service perimeters are only available to projects with a Cloud Organization. See the documentation for Creating and managing organizations.
  • Management of VPC service perimeters requires organization-level permissions. If you do not have permissions at this level, consult with your organization's IT administrators to set up VPC service perimeters around a dedicated data-sharing project, and work with them to configure it.
  • Putting a project in the service perimeter as described above places all Cloud Storage buckets and Artifact Registry repositories in the project inside the perimeter. You will therefore probably want a dedicated project (with no other cloud services enabled) for buckets and repositories that share the same location and restrictions.
  • The above configuration restricts direct bucket-to-bucket copies, even within the same region. If you want to copy an image from one registry to another in the above example, you can pull the image to a VM in us-central1 and then push it to any target registry you have access to (see the sketch after this list).
  • We have considered whether the Storage Transfer Service API should also be restricted. We believe the answer is no: the Storage Transfer Service ultimately calls Cloud Storage APIs, which will be checked appropriately.
  • It is also possible to Configure GCS to prevent data transfer charges. The process is similar to the one in this document, but specific to GCS access.
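
For example, copying an image via a us-central1 VM might look like the following sketch (the target registry europe-west2-docker.pkg.dev/other-project/docker-europe is hypothetical; substitute any registry you can push to):

# On a VM in us-central1, pull through the perimeter, retag, and push out:
$ docker pull us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image:latest
$ docker tag us-central1-docker.pkg.dev/test-project/docker-us-central1/my-image:latest \
    europe-west2-docker.pkg.dev/other-project/docker-europe/my-image:latest
$ docker push europe-west2-docker.pkg.dev/other-project/docker-europe/my-image:latest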


Comments


Hernan J. Larrea

Hi! Nice article. I have a question: for the Access Level you are creating, you are using the advertised range of public IPs that VMs might get in the given region. But will this work if your VMs don't have public IPs? Is implementing a perimeter still an option to prevent egress costs in a scenario where VMs leverage Cloud NAT to access the internet instead of public IPs directly attached to them? Thanks!

WillyN

    Hi Hernan,

    In your case I think conceptually this all still works, but you will need to make adjustments to the created Access Level. Looking at https://cloud.google.com/nat/docs/overview, there are two things to highlight: 

    """You can reduce the need for individual VMs to each have external IP addresses. Subject to egress firewall rules, VMs without external IP addresses can access destinations on the internet. For example, you might have VMs that only need internet access to download updates or complete provisioning.

    If you use manual NAT IP address assignment to configure a Cloud NAT gateway, you can confidently share a set of common external source IP addresses with a destination party. For example, a destination service might only allow connections from known external IP addresses."""

    and

    """You can configure a Cloud NAT gateway to provide NAT for the following:

    Primary and secondary IP address ranges of all subnets in the region. A single Cloud NAT gateway provides NAT for the primary internal IP addresses and all alias IP ranges of eligible VMs whose network interfaces use a subnet in the region. This option uses exactly one NAT gateway per region.

    Primary IP address ranges of all subnets in the region. A single Cloud NAT gateway provides NAT for the primary internal IP addresses and alias IP ranges from subnet primary IP address ranges of eligible VMs whose network interfaces use a subnet in the region. You can create additional Cloud NAT gateways in the region to provide NAT for alias IP ranges from subnet secondary IP address ranges of eligible VMs.

    Custom subnet IP address ranges. You can create as many Cloud NAT gateways as necessary, subject to Cloud NAT quotas and limits. You choose which subnet primary or secondary IP address ranges should be served by each gateway."""

So to me, this sounds like your VMs go through a Cloud NAT gateway, and your Access Level will need to allow traffic from the gateway. I'm not sure whether you'll need to configure the specific IPs of the gateway, or whether you can make the configuration without specifying IPs.
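
If manual NAT IP assignment works for you, a rough (untested) sketch of reserving a static address, attaching it to the gateway, and allowing it through the Access Level might look like this; the router and address names are illustrative:

# Reserve a static external IP and use it as the NAT gateway's source address:
$ gcloud compute addresses create nat-ip-1 --region=us-central1
$ gcloud compute routers nats create nat-config \
    --router=my-router \
    --region=us-central1 \
    --nat-external-ip-pool=nat-ip-1 \
    --nat-all-subnet-ip-ranges
# Then add the reserved address to the access level YAML, e.g.:
# - ipSubnetworks:
#   - <reserved NAT IP>/32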

