This document provides information on the costs of using key cloud services (Google Cloud Storage, Google Compute Engine, and Google BigQuery), as well as examples to help you make informed decisions about controlling costs on Terra. For up-to-date billing information, see the documentation for Google Cloud Pricing.
Content for this article was contributed by Matt Bookman from Verily Life Sciences based on work done in Terra for AMP PD, a public/private partnership collaborating toward biomarker discovery to advance the development of Parkinson’s Disease therapies.
Cloud costs overview
The Terra platform is free to use. For example, you can browse showcase workspaces and the Data Library as soon as you register for an account. However, operations in Terra - such as running workflows, running Jupyter Notebooks, and accessing and storing data - may incur Google Cloud charges. These charges are billed by Google Cloud and paid through your Terra Billing project.
Cloud costs fall into five main categories: storage, compute and disks, query processing, data transfer out (egress), and data retrieval.
Common use-case examples
Below are several examples of operations you might perform on Terra and the Google Cloud costs they can incur.
Running a workflow
- Converting a CRAM to a BAM
- Aligning a genomic sample to a reference and performing variant calling using GATK Best Practices WDLs
- Aligning a transcriptomic sample to a reference using STAR
Running a notebook
- Performing quality control checks on genomic data
- Analyzing genomic variants
- Visualization and analysis (in R or Python) on outputs from running a workflow
Storing data in the cloud (Google Cloud Storage buckets or BigQuery)
- Notebooks (ipynb files)
- Clinical data (CSV or TSV files)
- Genomic data
- Transcriptomics data
Google Cloud Storage
Google Cloud Storage (GCS) is an object store where objects are stored in buckets. You can think of it as a place to store files in a structure similar to folders or directories. For more details, see How subdirectories work.
Storage has a cost. Additionally, accessing or moving data out of GCS may incur charges (data transfer out), depending on where the data is stored and where the data will be accessed from.
Storage cost considerations (primary drivers)
Below are questions that may influence the cost of data storage for you or anyone accessing your data. To learn more, see Customizing where your data are stored and analyzed.
How much are you storing?
A key cloud concept is that you only pay for what you use. Thus you don't need to preallocate storage in GCS (like buying an array of disks); you simply pay for what you store.
Where do you store it?
Google Cloud Storage provides several different storage classes, each with different pricing. The options are primarily based on where you want to store the data and how frequently you access it. Storing data in more locations (multiple "regions") is more expensive than storing data in fewer locations ("regional").
See US multi-region versus regional storage: tradeoffs. For more information, read about Google Cloud regions and bucket locations.
How frequently do you access it?
Data you access frequently should be stored in a standard (more expensive) storage tier, which carries no retrieval charges. Data you access infrequently can be stored in less expensive "cold" storage, which does charge for retrieval.
Storage classes cost options
Multiregional versus regional storage
Multiregional storage is the most expensive option at $0.026 per GB per month. Regional storage is less expensive at $0.020 - $0.023 per GB per month (depending on what region - see Google's pricing tables for details).
Multiregional storage is most appropriate for data that need to be accessed quickly and frequently from many locations (e.g., for a website or gaming).
This is not typically the case for genomic or transcriptomic research data. With these data types, overall access frequency is low and the emphasis is on managing storage costs.
Nearline and Coldline Storage
For data that are accessed very infrequently, Google Cloud offers Nearline and Coldline storage. These storage classes offer significantly reduced costs for storage ($0.010 per GB for Nearline and $0.004 per GB for Coldline), but add a retrieval charge ($0.01 per GB for Nearline and $0.05 per GB for Coldline).
These storage classes are most appropriate for archiving data, for example, after processing FASTQs into BAMs or CRAMs.
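To compare the options for a specific dataset, it can help to put rough numbers on each class. Below is a minimal sketch in Python using the per-GB prices quoted above; the dataset size and monthly retrieval volume are illustrative placeholders, so substitute your own figures and check Google's pricing pages for current, region-specific rates.

```python
# Rough monthly cost comparison for storing and occasionally retrieving a dataset.
# Prices are the per-GB figures quoted above; check Google's pricing pages for
# current values before relying on them.

STORAGE_PER_GB = {          # $ per GB stored per month
    "multi-regional": 0.026,
    "regional": 0.020,
    "nearline": 0.010,
    "coldline": 0.004,
}
RETRIEVAL_PER_GB = {        # $ per GB retrieved (cold classes only)
    "multi-regional": 0.0,
    "regional": 0.0,
    "nearline": 0.01,
    "coldline": 0.05,
}

def monthly_cost(gb_stored: float, gb_retrieved: float, storage_class: str) -> float:
    """Estimate one month of storage + retrieval cost (ignores egress and operations)."""
    return (gb_stored * STORAGE_PER_GB[storage_class]
            + gb_retrieved * RETRIEVAL_PER_GB[storage_class])

# Example: 10 TB of CRAMs, with 500 GB re-read during the month.
for cls in STORAGE_PER_GB:
    print(f"{cls:>15}: ${monthly_cost(10_000, 500, cls):,.2f} per month")
```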
Data transfer cost considerations
Data transfer charges apply when copying GCS data out of the region(s) where the data are stored.
Data transfer examples
- Copying data stored in one region to a Compute Engine virtual machine (VM) or Cloud Environment persistent disk in another region.
- Copying data stored in one region to a GCS bucket in another region.
- Copying data stored in a multi-region bucket to a regional GCS bucket.
- Downloading data to your workstation or laptop.
Network data transfer charges vary, but copying data to a Google Cloud storage location within the United States typically costs $0.01 per GB.
Data transfer out of GCS
Downloading to your local workstation, laptop, or anywhere else outside of Google Cloud is subject to General network pricing charges.
| Amount of data | Cost to transfer (per GB) |
| --- | --- |
| 0-1 TB | $0.12 |
| 1-10 TB | $0.11 |
| 10+ TB | $0.08 |
Accessing GCS data from within the same Cloud region where the data are stored incurs no data transfer charges.
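If you are planning a large download, a quick calculation against the tiered rates above can prevent surprises. The sketch below is illustrative only; the tier boundaries and rates follow the table above and may change.

```python
# Estimate the cost of downloading data out of Google Cloud to the internet,
# using the tiered per-GB rates in the table above (rates change over time).

EGRESS_TIERS = [            # (tier size in GB, $ per GB); None = unbounded final tier
    (1_024, 0.12),          # first 1 TB
    (9_216, 0.11),          # next 9 TB (up to 10 TB total)
    (None, 0.08),           # everything beyond 10 TB
]

def egress_cost(gb: float) -> float:
    """Total cost to transfer `gb` gigabytes out to the internet in one month."""
    cost, remaining = 0.0, gb
    for tier_gb, rate in EGRESS_TIERS:
        chunk = remaining if tier_gb is None else min(remaining, tier_gb)
        cost += chunk * rate
        remaining -= chunk
        if remaining <= 0:
            break
    return cost

print(f"Downloading 5 TB: ${egress_cost(5 * 1_024):,.2f}")   # roughly $573
```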
Retrieval cost considerations
Retrieval costs apply only to the "cold storage" classes: Nearline and Coldline.
Retrieval charges apply when you:
- Copy data from a cold storage bucket
- Move data within a cold storage bucket (a move is a copy followed by a deletion)
Google Compute Engine
Google Compute Engine (GCE) provides virtual machines (VMs) and block storage (disks) which can be used for running analyses such as converting a CRAM file to a BAM file or running a Jupyter Notebook to transform and visualize data.
Compute and disks concepts (VMs)
GCE allows you to create and destroy VMs as you need them. You can create VMs of different shapes and sizes (CPU and memory) for different workloads.
GCE follows the cloud philosophy that you only pay for what you use, and you are only billed for VMs and disks from the time you create them to the time you destroy them. To be clear, however, you "use" (i.e., accrue charges for) CPU, memory, and disk space the entire time your VM is running, even if it sits idle.
GCE's virtualization offers additional flexibility in that you can "stop" a running VM (at which point you stop being charged for the CPU and memory, but continue accruing charges for the disk) and "start" it again later. You can even change the amount of CPU and memory when you restart the VM.
Saving money with preemptible VMs
GCE offers significantly reduced costs for using preemptible VMs. If you have a workflow that will run in fewer than 24 hours, you can save up to 80% by using preemptible VMs. To learn more, see Controlling Cloud costs - sample use cases.
Compute costs
Detailing GCE pricing flexibility is beyond the scope of this document. See pricing details from the GCE Pricing documentation.
Questions to ask
- How many CPUs does my compute task require?
- How much memory does my compute task require?
- How much disk does my compute task require?
- Can my compute task finish in fewer than 24 hours?
If your compute need is for a long-running compute node, use a full-priced VM, since a preemptible VM lasts at most 24 hours. If your compute task can finish in fewer than 24 hours and you can handle being preempted at any time within that window, a preemptible VM will cost almost 80% less. For more information, see the Google documentation on Preemption selection.
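A rough comparison can help you decide whether the preemption risk is worth taking. The sketch below is illustrative: the hourly rate is a placeholder for whatever machine type you actually use, the ~80% discount comes from the discussion above, and it pessimistically assumes every preemption throws away a full run's worth of work.

```python
# Back-of-the-envelope comparison of a full-priced vs. preemptible VM for a
# short-running task. Rates are illustrative placeholders, not current prices.

FULL_PRICE_PER_HOUR = 0.0475          # placeholder hourly rate for a small VM
PREEMPTIBLE_DISCOUNT = 0.80           # preemptible VMs cost roughly 80% less

def task_cost(hours: float, preemptible: bool, preemptions: int = 0) -> float:
    """Cost of a task, pessimistically rerun from scratch after each preemption."""
    rate = FULL_PRICE_PER_HOUR * ((1 - PREEMPTIBLE_DISCOUNT) if preemptible else 1.0)
    return rate * hours * (1 + preemptions)

print(f"Full price, 10 h:                 ${task_cost(10, False):.2f}")
print(f"Preemptible, 10 h, no preemption: ${task_cost(10, True):.2f}")
print(f"Preemptible, 10 h, 2 preemptions: ${task_cost(10, True, 2):.2f}")
```

Even with a couple of restarts, the preemptible run typically comes out well ahead for tasks that fit inside the 24-hour window.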
Disk costs
GCE offers a range of disk types, including
- Network-attached magnetic disks (persistent disk standard).
- Network-attached solid state disks (persistent disk SSD).
- Locally-attached solid state disks (local SSD).
What disk type is right for you?
In general, you pay more for larger disks and for higher-performance disks. Most life sciences workflows are not I/O bound, so the least expensive disk (Persistent Disk Standard) is typically the best choice. If your workflow is I/O bound, however, you may find that using Local SSDs on a preemptible instance is the best choice.
Data transfer costs
Data transfer charges apply when copying data out of the zone that a compute engine VM is running in.
Data transfer examples
- Downloading data to your workstation or laptop.
- Copying data from a VM in one zone to a VM in another zone.
- Copying data from a VM in one region to a GCS bucket in another region.
- Analyzing data stored with a different cloud provider from GCE.
No data transfer charges accrue for copying data between VMs in the same zone, or between a VM and a GCS bucket in the same region.
Note that the amount of Always Free Internet data transfer is currently 100 GB per month to each qualifying data transfer destination.
Google BigQuery
Google BigQuery (BQ) is a database where "tables" are stored in "datasets," including both tabular data and nested data. You can issue SQL queries to filter and retrieve data in BigQuery.
See this Google blog on Cost Optimization Best Practices for BigQuery.
Storing and accessing data in BigQuery have associated costs!
When you query data in BigQuery, consider just how much data your query "touches," as BigQuery query billing is based on the amount of data that the query engine "looks at" to satisfy the request.
BigQuery Storage Costs
BigQuery storage costs are $0.02 per GB per month for active storage, dropping to $0.01 per GB per month for tables that have not been modified for 90 consecutive days.
Query costs
When you run a query, you're charged by the number of bytes processed in the columns you select or filter on, even if you set an explicit limit on the number of records returned. Be careful about which columns you put in your SELECT lists and WHERE clauses.
BigQuery query costs are $5.00 per TB, with the first 1 TB per month free.
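One way to see what a query will cost before running it is a dry run, which reports how many bytes the query would scan without processing any data. The sketch below uses the google-cloud-bigquery Python client; the project, dataset, table, and column names are placeholders.

```python
# Estimate a query's scan size (and therefore its cost) with a dry run.
from google.cloud import bigquery

client = bigquery.Client(project="my-terra-project")     # hypothetical project ID

sql = """
    SELECT sample_id, gene, expression                   -- only the columns you need
    FROM `my-terra-project.my_dataset.expression`
    WHERE gene = 'TP53'
"""

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(sql, job_config=job_config)           # returns immediately; nothing is scanned

tb_scanned = job.total_bytes_processed / 1e12
print(f"Query would scan {tb_scanned:.4f} TB (~${tb_scanned * 5:.2f} at $5/TB)")
```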
Resources for controlling query costs
BigQuery offers a number of features to help control query costs; see the BigQuery documentation on controlling costs.
BigQuery data transfer costs
BigQuery does not include explicit network data transfer charges; however, it does limit how much data a single query can return. A query has a maximum response size of 10 GB compressed.
Helpful hint: When issuing a query that returns a large amount of data, write the results to another BigQuery table or a GCS bucket.
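For example, with the google-cloud-bigquery Python client you can point a query's output at a destination table instead of pulling the rows back to your notebook. The project, dataset, and table names below are placeholders.

```python
# Write a large query result to a destination table rather than returning it.
from google.cloud import bigquery

client = bigquery.Client(project="my-terra-project")     # hypothetical project ID
destination = "my-terra-project.my_dataset.filtered_variants"

job_config = bigquery.QueryJobConfig(
    destination=destination,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,  # overwrite if it exists
)
client.query(
    "SELECT * FROM `my-terra-project.my_dataset.variants` WHERE qual >= 30",
    job_config=job_config,
).result()                                                # wait for the query to finish
```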
Controlling storage costs (large data)
While many life sciences projects commit time and energy to optimizing their data-processing workflows, long-term storage costs often dominate the budget. The reason is the sheer volume of data generated in the life sciences, such as genomic and transcriptomic data. The following sections provide tips for keeping the storage costs of large data under control.
1. Use regional storage
For life sciences data, there is rarely a reason to make data available in multiple Google Cloud regions. The cost of regional storage is 77% of that for multiregional storage. The easiest way to save your project 23% is to put your data and compute in a single region.
2. Compress large data
Compression rates vary, but some common options are listed below (a command sketch follows this list):
- Compress STAR-generated BAMs (and index them) with samtools (discussed above).
- Convert WGS BAMs to CRAMs (and index them) with samtools.
- Compress VCFs with bgzip (and index them with tabix).
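The sketch below drives those compression steps from Python. It assumes samtools, bgzip, and tabix are on your PATH, and the file names (ref.fa, sample.bam, sample.vcf) are placeholders for your own files.

```python
# Sketch of the compression steps above, driven from Python via subprocess.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# BAM -> CRAM (reference-based compression), plus index
run(["samtools", "view", "-C", "-T", "ref.fa", "-o", "sample.cram", "sample.bam"])
run(["samtools", "index", "sample.cram"])

# VCF -> bgzipped VCF, plus tabix index
run(["bgzip", "sample.vcf"])                  # produces sample.vcf.gz
run(["tabix", "-p", "vcf", "sample.vcf.gz"])
```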
3. Move data to cold storage (Nearline or Coldline)
Deciding whether you can move large files to cold storage can be tricky. If you move files that are accessed frequently, the access charges can wipe away the storage savings. Note that Terra workspace buckets have autoclass enabled by default. Autoclass automatically transitions objects in your bucket to appropriate storage classes based on each object's access pattern. The feature moves data that is not accessed to colder storage classes to reduce storage cost and moves data that is accessed to Standard storage to optimize future accesses.
However, much life science data goes through a life cycle of
- Source data is generated.
- Source data is processed into smaller summary information.
- Summary information is used extensively.
- Source data is used rarely.
FASTQ files for genomics and transcriptomics fit this model and are large. Moving these files to Nearline after initial processing can save a project a lot of money on its largest data.
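As a sketch of what that might look like for your own bucket, the snippet below uses the google-cloud-storage Python client; the bucket name and prefix are placeholders (and for Terra workspace buckets with Autoclass enabled, this transition happens automatically).

```python
# Move processed FASTQs from Standard to Nearline storage in your own bucket.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("fc-my-workspace-bucket")          # hypothetical bucket name

for blob in client.list_blobs(bucket, prefix="fastqs/"):
    if blob.name.endswith(".fastq.gz") and blob.storage_class == "STANDARD":
        blob.update_storage_class("NEARLINE")             # rewrites the object in place
        print(f"Moved {blob.name} to NEARLINE")
```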
4. Clean up intermediate files promptly
WDL-based workflows on large files, such as FASTQs, BAMs, and gVCFs, often have intermediate stages where large files are sharded or converted to different formats, creating many artifacts that are stored in Google Cloud Storage. Leaving these files in Cloud Storage can add significant cost to running workflows. Once the workflow succeeds, clean up the intermediate files, especially the large ones.
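A minimal sketch of such a cleanup with the google-cloud-storage Python client is below; the bucket name, submission prefix, and file suffixes are hypothetical, so verify what you are matching before deleting anything.

```python
# Delete large intermediate files left behind by a finished workflow run.
from google.cloud import storage

client = storage.Client()
SUBMISSION_PREFIX = "submissions/abc123/"                 # hypothetical submission folder

deleted_bytes = 0
for blob in client.list_blobs("fc-my-workspace-bucket", prefix=SUBMISSION_PREFIX):
    # Keep final outputs; remove sharded/intermediate artifacts by suffix.
    if blob.name.endswith((".unsorted.bam", ".shard.vcf.gz")):
        deleted_bytes += blob.size
        blob.delete()

print(f"Freed {deleted_bytes / 1e9:.1f} GB")
```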
Controlling compute costs
Many people in the life sciences are familiar with working in an HPC environment, where a compute cluster is available to them. This cluster is typically of a modest, fixed size and is often shared with other researchers and departments. The primary driver is to have jobs finish quickly while minimizing compute resources (CPUs, memory, and disk).
In this environment, if you have 1,000 samples to process, each sample takes a day to process, and you have enough capacity to run 100 samples concurrently, the processing will finish in 10 days (if all goes well). If you can reduce the time to process a single sample by 30%, you'll finish in a week.
With cloud computing, you are generally not constrained by resources in the same way. If you want to run 1,000 samples concurrently, you can (just be sure to request more Compute Engine quota; if working in Terra, see this article). Reducing runtimes and compute resources will save you money, but you have other money-saving knobs to turn, notably preemptible VMs. Life science workflow runners, like Cromwell, are designed to take advantage of preemptible VMs.
To save on compute costs, approach optimization in the following order
- Use preemptible VMs.
- Reduce the number of CPUs (they are the most expensive resource).
- Reduce the amount of memory (add monitoring to your workflows).
- Reduce the amount of disk used (add monitoring to your workflows).
Below are some specific suggestions around preemptible VMs and monitoring.
1. Use Preemptible VMs
Cromwell can use preemptible VMs, and for each task you can set the number of automatic retries before falling back to a full-priced VM.
Some additional details to know about using preemptible VMs
- Smaller VMs are less likely to be preempted than large VMs.
- Preemption rates are lower during nights and weekends.
- IO-bound workflows may benefit from using Local SSDs on preemptible instances.
- Preemptions tend to happen early in a VM's lifetime.
This last bullet point is important to understand. It is explained further in Google's documentation.
Generally, Compute Engine avoids preempting too many instances from a single customer and will preempt instances that were launched most recently. In the long run, this strategy helps minimize lost work across your cluster. Compute Engine does not charge you for instances if they are preempted in the first minute after they start running.
So while running on a preemptible VM and getting preempted adds cost overhead (cutting into your savings), such preemptions tend to happen early and the additional cost is modest.
2. Monitor peak use
It is difficult to save on CPUs, memory, and disks if you don't know your peak usage while workflows are running. Adding a little bit of monitoring can go a long way to help understand these usage requirements.
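As one simple approach, the sketch below polls CPU, memory, and disk usage and reports the peaks. It assumes the psutil Python package is available on the VM and is meant as a starting point rather than a full monitoring solution.

```python
# Minimal resource monitor to run alongside a workflow task to capture peak usage.
import time
import psutil

peak_cpu_pct = 0.0
peak_mem_gb = 0.0
peak_disk_gb = 0.0

try:
    while True:
        peak_cpu_pct = max(peak_cpu_pct, psutil.cpu_percent(interval=1))
        peak_mem_gb = max(peak_mem_gb, psutil.virtual_memory().used / 1e9)
        peak_disk_gb = max(peak_disk_gb, psutil.disk_usage("/").used / 1e9)
        time.sleep(10)
except KeyboardInterrupt:
    print(f"Peak CPU: {peak_cpu_pct:.0f}%  "
          f"Peak memory: {peak_mem_gb:.1f} GB  "
          f"Peak disk: {peak_disk_gb:.1f} GB")
```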
Observations you may make about a workflow stage, once you add monitoring:
This workflow stage for the largest sample uses:
- about the same <cpu, memory, disk> as the smallest sample, or
- much more <cpu, memory, disk> than the smallest sample.
With this information, you can decide whether it is worthwhile to adjust cpu, memory, or disk on a per-sample basis.
You might also observe:
- This workflow stage runs a sequence of commands, and the disk usage never goes down.
  - If you clean up intermediate files while running, you can allocate less disk space for each workflow.
- This workflow stage runs a sequence of commands; some are multithreaded and take advantage of more CPUs, and some are single-threaded.
  - If you make this a multistage workflow, you can use a single-CPU VM for some steps and reduce total CPU cost.
- This workflow runs on an n1-standard machine but never uses all of the memory.
  - You can change to an n1-highcpu machine (or a custom VM).
Controlling data transfer costs
Use preemptible VMs to copy or move data from a multiregional bucket to a regional bucket.
More details on controlling data transfer costs
Moving data from a multiregional bucket to a regional bucket incurs data transfer charges at a rate of $0.01/GB.
For example, this means that moving 100 TB of data from a Terra workspace bucket (single region US) to your own multi-regional bucket will cost $1,000.
Suppose that those 100 TB of data are made up of one thousand 100 GB files. You could create a workflow on Terra that runs 1,000 concurrent n1-standard-1 preemptible VMs, each with a 200 GB disk, to:
- Copy file from multiregional bucket to VM
- Copy file from VM to Regional bucket
- Remove the file from the multiregional bucket
Each VM + disk would cost approximately $0.02 per hour and would finish in less than 1 hour. Your cost for the transfer is thus on the order of $20.
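The arithmetic behind those two numbers, spelled out with the figures used above:

```python
# Network charge for a direct bucket-to-bucket move of 100 TB versus the
# compute cost of the preemptible VMs doing the copy.
DATA_TB = 100
TRANSFER_PER_GB = 0.01            # $/GB transfer rate quoted above
VM_COUNT = 1_000
VM_HOURLY = 0.02                  # approximate cost of one preemptible VM + 200 GB disk
HOURS = 1

network_charge = DATA_TB * 1_000 * TRANSFER_PER_GB
vm_charge = VM_COUNT * VM_HOURLY * HOURS

print(f"Network data transfer:  ${network_charge:,.0f}")   # $1,000
print(f"Preemptible VM compute: ${vm_charge:,.0f}")        # $20
```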
To learn more, see Controlling Cloud costs - sample use cases.