The Job History tab is your workspace operations dashboard, where you can check the status of past and current workflow submissions, drill down into what's going on, and find direct links to all the input and output files involved. This article walks you through the features you'll find there.
Workflow submission structure
Your analyses are organized and reported on in a hierarchical structure (from top to bottom below):
Submission: a collection of workflows submitted in one batch
Workflow: a particular run of a workflow/method on a specific dataset
Task: the lowest level of analysis reporting, representing individual calls/jobs made during workflow execution
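To make the hierarchy concrete, here is a minimal, hypothetical WDL in which one workflow calls one task. Launching it on N rows of a data table in a single batch would create one submission containing N workflows, each reporting one task (the names and command are illustrative, not a real Terra method):

```wdl
version 1.0

# Hypothetical example: a workflow with a single task.
# Launching this on N rows of a data table produces one
# submission containing N workflows, each with one task call.
workflow HelloSample {
  input {
    String sample_name
  }
  call SayHello { input: name = sample_name }
  output {
    File greeting = SayHello.out
  }
}

task SayHello {
  input {
    String name
  }
  command <<<
    echo "Hello, ~{name}" > greeting.txt
  >>>
  output {
    File out = "greeting.txt"
  }
  runtime {
    docker: "ubuntu:20.04"
  }
}
```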
By default the Job History page lists all submissions that have been made so far in a workspace along with their status. It is possible to filter the list, but you cannot delete any submissions. Similarly, it is not possible to delete workflows within a submission. For guidance about deleting files that belong to past submissions, please see the forum.
When you launch a new submission, you'll be redirected to the Job History page automatically. Here you'll see a list of workflows in the submission, along with their current status and links to further information. Let’s assume we just launched an analysis and want more detail about the status of a particular workflow.
Monitoring submission status
The workflows can be in the following states: Queued, Launching, Submitted, Running, Aborting, Succeeded, Failed, or Aborted. To find out what each of these means, read on!
Queued, Launching or Submitted
In these states, the workflows are being handed off from Terra to Cromwell. More on this later…
When a workflow is Running, the commands specified in the method's WDL script are being executed on virtual machines.
The Happy Path diagram below shows how each task status affects the workflow and analysis submission status -- in the best case scenario where all your tasks and workflows succeed.
Aborted and Aborting
These will display for analysis submissions, workflows, and tasks if you have requested a workflow be aborted. These are not pictured here.
When all the tasks reach Done successfully, the workflow status will be updated to Succeeded, and the Job History page will show the submission as Done. If you configured outputs to write to the data table, you can look for them in the data table (in the Data tab).
Remember that "this" refers to the data entity you are running your workflow on. If you chose participant and named an output
this.participant_file, a new column called participant_file will appear in the participant table with a link to the output. The actual output file will be saved to the workspace bucket.
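For example, if the WDL declares a file output like the sketch below, entering this.participant_file as the attribute expression on the workflow's Outputs tab tells Terra to write the resulting file link into a participant_file column of the participant table (the output name and file path here are illustrative):

```wdl
# Illustrative WDL output declaration. Mapping this output to
# "this.participant_file" in the Terra Outputs tab adds a
# participant_file column to the participant table, pointing at
# the file in the workspace bucket.
output {
  File participant_file = "results/summary.txt"
}
```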
You can also view outputs in the Job Manager, which you can reach from the Job History page by clicking on the succeeded submission (far left in the Job History list):
1. Select Succeeded from the Submission status dropdown.
2. Click on the Job Manager icon at the right.
3. In the Job Manager page, access Outputs by selecting the icon under Outputs (1). You can also access details in the backend log and execution directory by selecting from the icons at the right (2).
The red triangle icon means your workflow submission failed to run or complete for some reason. Never fear! It happens to everyone. To learn more about how to troubleshoot errors and achieve successful submissions, see Troubleshooting workflows: Tips and tricks.
Submissions: what happens behind the scenes
Under the hood, quite a lot happens when you launch an analysis: various system components kick into gear to ensure that your submission of one or more workflows is properly assembled and that each individual task is dispatched to Google Compute Engine for execution. Meanwhile, on the surface, Terra automatically takes you to the Job History page, where you can view the status of your workflow(s) and monitor how the work is progressing (note that you need to refresh the browser window to update the status). If systems could talk, it would go something like this:
1. Terra takes the workflow specified in the WDL and asks Cromwell to run it.
2. Cromwell asks the Google Pipelines API (PAPI) to launch each task in the workflow when the inputs become available. Cromwell is responsible for managing the sequence of the tasks/jobs.
3. PAPI starts a virtual machine (VM) per task and provides the inputs; the WDL specifies what the task should do, the environment to do it in (the Docker image), and what outputs to collect when it is done. Each VM's requirements (number of CPUs, memory, disk space) can be specified in the task. Once the task is done, PAPI shuts down the VM.
4. The Docker image required for each task is pulled to the virtual machine along with any inputs from Google buckets. When the outputs are produced, they are written to the Google bucket of the workspace where the analysis was launched, and links to the outputs are written back to the workspace data table.
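The VM requirements mentioned in step 3 are declared in the task's runtime section. Here is a sketch of what that looks like in a WDL task (the task name, command, image, and resource values are all illustrative):

```wdl
# Illustrative runtime block: PAPI provisions a VM matching these
# requests, pulls the Docker image, runs the command, and shuts
# the VM down when the task finishes.
task AlignReads {
  input {
    File reads
  }
  command <<<
    echo "aligning ~{reads}"
  >>>
  runtime {
    docker: "ubuntu:20.04"       # environment the command runs in
    cpu: 4                       # number of vCPUs requested
    memory: "16 GB"              # RAM requested
    disks: "local-disk 100 HDD"  # attached disk size and type
  }
  output {
    File log = stdout()
  }
}
```

Requesting only what a task actually needs keeps VM costs down, since each task runs on its own machine sized to these values.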