Workflows can fail for many different reasons. If you're lucky, you just have the wrong inputs plugged into the configuration, which is typically fast to fail and fast to fix. More complex are errors in the workflow code: bugs, or limitations in the analysis software package you're calling on. And of course you can fall victim to transient errors when something goes wrong on Google Cloud itself (even giants sometimes stumble). The information below will help you drill down to the root cause of errors so you can be up and running again quickly.
- High-level status on submissions
- Workflow-level status
- More detail, and metadata, on each task
- stdout and stderr logs
1. High-level status on submissions
You can find high-level status on the success or failure of your submissions, along with a few columns of metadata, in the Job History tab. You'll see a list of the workflow or workflows within each submission. For details about a particular submission, click the link in the "Submission" column.
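If you prefer to check on a submission from a script rather than the Job History tab, one option is to query the Terra (FireCloud) orchestration API directly. The following is a minimal sketch, assuming the api.firecloud.org host and the workspaces/submissions endpoint path; the workspace namespace, workspace name, and submission ID are placeholders you would replace with your own values.

```python
# Minimal sketch: list workflow statuses for a submission via the FireCloud/Terra
# orchestration API. The endpoint path and the placeholder values below are
# assumptions; substitute your own workspace namespace, workspace name, and
# submission ID (visible in the Job History tab).
import google.auth
from google.auth.transport.requests import AuthorizedSession

NAMESPACE = "my-billing-project"      # placeholder workspace namespace
WORKSPACE = "my-workspace"            # placeholder workspace name
SUBMISSION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder submission ID

# Uses your application default credentials
# (e.g. from `gcloud auth application-default login`).
credentials, _ = google.auth.default(
    scopes=[
        "https://www.googleapis.com/auth/userinfo.email",
        "https://www.googleapis.com/auth/userinfo.profile",
    ]
)
session = AuthorizedSession(credentials)

url = (
    "https://api.firecloud.org/api/workspaces/"
    f"{NAMESPACE}/{WORKSPACE}/submissions/{SUBMISSION_ID}"
)
response = session.get(url)
response.raise_for_status()

# Print each workflow's ID and status (e.g. Succeeded, Failed, Running).
for workflow in response.json().get("workflows", []):
    print(workflow.get("workflowId"), workflow.get("status"))
```

This gives you the same high-level view as the Job History tab; for anything more detailed you will still want the log files described below.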
2. Workflow-level status
That will take you to this screen, which contains further details about workflows within the submission and their status (failed, queued, etc.). For more detailed errors within specific workflows, click the “View” link to open the Job Manager interface:
3. Job Manager: more detail, and metadata, on individual tasks
If it's not immediately obvious what failed, the best sources of information are log files, which you can access directly in the Job Manager interface. Here you can preview the tail end of your logs by clicking on the icons (shown here in the card view):
Shown here in the list view:
Click on the icons to preview the log, with an option to expand to the full log.
4. stdout and stderr logs
The log functions, corresponding to the icons from left to right, are described below:
- Backend (Cromwell) log - A step-by-step report of actions during the execution of the task. These details include information about Docker setup, localization (copying files from your Google bucket into the Docker container), stdout from tools run within the command block of the task, and finally, the delocalization and Docker shutdown steps.
- Execution directory - Clicking on this icon will take you to the exact folder/directory in the Google Cloud Storage bucket where you can find your stderr, stdout, and backend logs. From there, you can open those files to view their contents or download them. If your task generates outputs, this directory is where you can find them as well (one way to read these files from a script is sketched after this list).
- Compute details - A report of the actions taken on the Google side by the Pipelines API (PAPI) to execute the task, including the request sent to Google, the exact events as tracked by Google, timestamps for each event, and any errors. This is where you will find information about errors that are unrelated to the WDL code or configuration, and it is especially useful for debugging failures that happen before a task starts or after it completes.
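For larger logs, or when you want to search across many tasks at once, it can be convenient to read the files straight from the workspace bucket rather than through the preview pane. The sketch below uses the google-cloud-storage client to list and print the log files under a task's execution directory; the bucket name and the execution-directory prefix are placeholders, and your directory layout may differ, so copy the real path from the "Execution directory" link in the Job Manager.

```python
# Minimal sketch: read task logs directly from the workspace bucket with the
# google-cloud-storage client (pip install google-cloud-storage). The bucket
# name and execution-directory prefix below are placeholders; replace them with
# the path shown by the "Execution directory" icon in the Job Manager.
from google.cloud import storage

BUCKET_NAME = "fc-00000000-0000-0000-0000-000000000000"  # placeholder workspace bucket
EXECUTION_PREFIX = (
    "submissions/<submission-id>/<workflow-name>/<workflow-id>/call-<task-name>/"
)  # placeholder prefix

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

# List everything under the task's execution directory, then print the familiar
# log files (stderr, stdout, and the backend log).
for blob in client.list_blobs(bucket, prefix=EXECUTION_PREFIX):
    filename = blob.name.rsplit("/", 1)[-1]
    if filename in ("stderr", "stdout") or filename.endswith(".log"):
        print(f"===== {blob.name} =====")
        print(blob.download_as_text())
```

The same listing will also show any task outputs written to the execution directory, which can help you confirm whether a task failed before or after producing its results.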