Currently, Terra supports two varieties of analysis: batch processing (pre-processing and processing genomics data such as aligning reads, variant and joint calling) and interactive statistical analysis and visualization in a Jupyter notebook (any analysis you can do in Python or R, you can do in a notebook). This article describes these in a bit more detail, with links to further resources.
Reads-to-variants workflows - i.e. batch processing with GATK best practices workflows
The GATK Best Practices provide step-by-step recommendations for performing variant discovery analysis in high-throughput sequencing (HTS) data. There are several different GATK Best Practices workflows tailored to particular applications depending on the type of variation of interest and the technology employed. The Best Practices documentation attempts to describe in detail the key principles of the processing and analysis steps required to go from raw reads coming off the sequencing machine, all the way to an appropriately filtered variant callset that can be used in downstream analyses.
For a list of curated GATK Best Practice showcase workspaces, see this page.
The first phase in all cases involves pre-processing the raw sequence data (provided in FASTQ or uBAM format) to produce analysis-ready BAM files. This involves alignment to a reference genome as well as some data cleanup operations to correct for technical biases and make the data suitable for analysis.
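To illustrate the raw input format, each read in a FASTQ file occupies four lines: an identifier, the base sequence, a separator, and per-base quality scores. A minimal parsing sketch in Python (the reads below are invented for illustration; this is not Terra or GATK code):

```python
# Parse FASTQ records (4 lines each: @id, sequence, '+', quality string).
# The read data below is invented for illustration only.
fastq_text = """@read1
ACGTACGT
+
IIIIHHHH
@read2
TTGGCCAA
+
FFFFFFFF
"""

def parse_fastq(text):
    """Yield (read_id, sequence, quality) tuples from FASTQ-formatted text."""
    lines = text.strip().splitlines()
    for i in range(0, len(lines), 4):
        read_id = lines[i].lstrip("@")
        yield read_id, lines[i + 1], lines[i + 3]

reads = list(parse_fastq(fastq_text))
print(reads[0])  # ('read1', 'ACGTACGT', 'IIIIHHHH')
```

In a real pipeline, files like this (or uBAMs) are the input to the alignment and cleanup steps described above.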
The next step proceeds from analysis-ready BAM files and produces variant calls. This involves identifying genomic variation in one or more individuals and applying filtering methods appropriate to the experimental design. The output is typically in VCF format although some classes of variants (such as CNVs) are difficult to represent in VCF and may therefore be represented in other structured text-based formats.
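VCF is a tab-delimited text format: each data row records a variant's position, reference and alternate alleles, a quality score, and a filter status. A minimal sketch of reading one such row in Python (the record is invented, not from a real callset; real VCF files also carry a header and per-sample genotype columns):

```python
# Parse one (invented) VCF data line into named fields.
vcf_line = "chr1\t12345\trs99\tA\tG\t50.0\tPASS\tDP=30"

FIELDS = ["CHROM", "POS", "ID", "REF", "ALT", "QUAL", "FILTER", "INFO"]

def parse_vcf_line(line):
    """Split a tab-delimited VCF data line into a field dictionary."""
    record = dict(zip(FIELDS, line.rstrip("\n").split("\t")))
    record["POS"] = int(record["POS"])
    record["QUAL"] = float(record["QUAL"])
    return record

variant = parse_vcf_line(vcf_line)
print(variant["CHROM"], variant["POS"], variant["REF"], ">", variant["ALT"])
```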
Depending on the application, additional steps such as filtering and annotation may be required to produce a callset ready for downstream genetic analysis. This typically involves using resources of known variation, truthsets and other metadata to assess and improve the accuracy of the results as well as attach additional information.
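To make the filtering idea concrete, here is a deliberately simplified hard-filter sketch in Python that keeps only variants above a quality threshold. (GATK's actual Best Practices filtering uses many annotations and more sophisticated methods such as VQSR; the records and threshold here are invented.)

```python
# Simplified hard filter: keep variants whose QUAL meets a threshold.
# Invented records; real filtering considers many annotations, not QUAL alone.
variants = [
    {"pos": 12345, "ref": "A", "alt": "G", "qual": 50.0},
    {"pos": 22222, "ref": "C", "alt": "T", "qual": 8.5},
    {"pos": 33333, "ref": "G", "alt": "A", "qual": 99.0},
]

def hard_filter(records, min_qual=30.0):
    """Return the records passing the quality threshold."""
    return [r for r in records if r["qual"] >= min_qual]

passing = hard_filter(variants)
print([r["pos"] for r in passing])  # [12345, 33333]
```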
Statistical analysis and visualization in real time
The platform's integration of Jupyter notebooks expands analysis options in Terra. Notebooks are applications that contain code cells to run interactive analysis (in R or Python) and documentation cells in flexible markdown language. Coupled with BigQuery and Google Cloud Storage, notebooks enable you to run complex statistics and visualization interactively on large amounts of data, including tabular data (think medical records or wearables data). Instead of programming an analysis or visualization to run, going away while it runs, and returning to see the results, you can run the cells in your notebook and see the results immediately. And because every code cell can be accompanied by documentation, notebooks are an ideal way to collaborate and share your analysis.
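The kind of quick, interactive exploration of tabular data a notebook supports might look like the sketch below, written with only the Python standard library on an invented table (in practice you would typically use a library like pandas against BigQuery tables or files in cloud storage):

```python
import csv
import io
import statistics

# Invented tabular data standing in for, e.g., a medical-records extract.
table = """subject_id,age,cholesterol
S1,54,201
S2,61,240
S3,47,180
S4,70,255
"""

rows = list(csv.DictReader(io.StringIO(table)))
ages = [int(r["age"]) for r in rows]
chol = [float(r["cholesterol"]) for r in rows]

# In a notebook, each of these summaries could live in its own cell,
# with the output displayed immediately below it.
print("n =", len(rows))
print("mean age =", statistics.mean(ages))
print("median cholesterol =", statistics.median(chol))
```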
You can do a wide range of analysis tasks in a Jupyter notebook on Terra, from exploratory statistics to publication-ready visualizations.
If you are new to Jupyter notebooks, see this Intro to Jupyter Notebooks article.
Some benefits of analysis in a Jupyter notebook
Notebooks make it easy to record and reproduce data analysis steps
Insights in biomedical research require data analysis, but complex analysis is hard to document, share and reproduce. Notebooks enable researchers to quickly develop a rich scientific document that conducts an analysis, shows the results, and explains the scientific context. Each code cell of a notebook executes commands to manipulate and explore your data. Code cells can be written in Python, R, or other languages already familiar to the researcher. It is straightforward to expand the functionality of the source code by installing pre-existing libraries, packages or modules of code in a variety of languages. Markdown cells contain formatted explanatory text, links, and images to complement code cells. Unlike informal notes, Jupyter notebooks mean you will never have questions you can't answer because you forgot your exact analysis steps from eight years ago.
The notebook's linear structure records each step you take in order. When shared, someone else can see how you manipulated the data and can execute the cells in order to reproduce your analysis.
They enable interactive analysis
When you “run” a code cell, its output displays right away, directly underneath the cell. Working in a notebook, you can run an analysis, observe the result, then change the parameters and re-run the analysis, step by step, in real time.
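This run-inspect-adjust loop can be sketched as follows: a cell defines an analysis step with a parameter, and you simply edit the parameter and run the cell again (the measurements and threshold values below are invented for illustration):

```python
# Invented measurements; in a notebook each call below would be one cell run.
measurements = [2.1, 3.8, 7.5, 1.2, 9.9, 4.4]

def count_above(values, threshold):
    """Analysis step: how many measurements exceed the threshold?"""
    return sum(1 for v in values if v > threshold)

# First run of the cell:
print(count_above(measurements, threshold=5.0))  # 2
# Change the parameter and re-run to see the new result immediately:
print(count_above(measurements, threshold=3.0))  # 4
```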
Notebooks extend the information content of published articles
Traditional scientific journals can only capture so much detail: much of the critical data-analysis process happens under the hood and is missing from a methods summary. By publishing notebooks as an addendum to a traditional publication, researchers can document exactly how they derived their results and make it easy for others to reproduce or replicate their analysis. Seeing, and executing, the actual code tells so much more.
Further, when others open your notebook, they can explore it and even build on your findings. They can easily access your methods and apply them to other populations.
They make collaborating and sharing seamless
Because notebooks are easy to share and self-contained, collaborating on work in progress and sharing results is as simple as sharing a workspace.
See more on interactive analysis on Terra here.