Default runtime attributes for workflow submissions

Jason Cerrato

Runtime attributes are configured in the runtime block of a WDL task. If no value is specified for a customizable attribute, your task runs with the default value for that attribute. The table below lists the default for each runtime attribute and shows how to customize it to a value of your choice.

For more information on runtime attributes, please see Cromwell's Runtime Attribute Descriptions documentation.
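
As a quick illustration, here is a minimal, hypothetical WDL task with a runtime block; any attribute left out of the block falls back to the defaults listed in the table below. (The docker attribute, not covered in the table, specifies the container image the task runs in.)

version 1.0

task count_lines {
    input {
        File infile
    }
    command <<<
        wc -l < ~{infile}
    >>>
    output {
        Int n_lines = read_int(stdout())
    }
    runtime {
        docker: "ubuntu:20.04"      # container image for the task
        cpu: 2                      # overrides the default of 1
        memory: "4G"                # overrides the default of 2G
        disks: "local-disk 20 HDD"
        preemptible: 1
        # attributes not listed here (e.g. bootDiskSizeGb) use the defaults below
    }
}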

Terra supports the following CPU platforms:

  • Intel Cascade Lake (n2)
  • Intel Skylake (n1)
  • Intel Broadwell (n1)
  • Intel Haswell (n1)
  • Intel Ivy Bridge (n1)
  • Intel Sandy Bridge (n1)
  • AMD Rome (n2d)

Terra requests a custom Google virtual machine (VM) based on your specification. Please see the notes for CPU and memory to learn how Google interprets your resource request. See Google's documentation for more information about their machine families and rules for custom machines.

Each runtime attribute below is listed with its default value, followed by an example runtime block showing how to customize it.

Region of Workflow VM
Default: us-central1
How to customize:

runtime {
    zones: "us-east1-c
}

To learn more about available regions, see Google's regions and zones documentation

Zone of Workflow VM
Default: us-central1-b
How to customize:

runtime {
    zones: "us-central1-c us-central1-f"
}

To learn more about available zones, see Google's regions and zones documentation

CPU
Default: 1
How to customize:

runtime {
    cpu: 2
}

For more details on CPU quotas, see How to troubleshoot and fix stalled workflows

Google Cloud interprets this as the minimum number of cores to use. If you request more memory than your CPU resource request can handle, Google automatically bumps up to the appropriate number of CPUs.

n1 machines can have 1-6.5 GB of memory per CPU.

n2 and n2d machines can have 0.5-8 GB of memory per CPU.
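
For example, under the n1 limits above, a request like the following (values are illustrative) would be bumped up to 2 CPUs, because 10 GB of memory is more than a single n1 CPU's 6.5 GB ceiling:

runtime {
    cpu: 1
    memory: "10G"
    # 10 GB exceeds the 6.5 GB-per-CPU limit on n1, so Google provisions 2 CPUs
}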

cpuPlatform
Default: whichever n1 CPU platform Google has available at the time of the request
How to customize:

runtime {
    cpuPlatform: "Intel Cascade Lake"
}

memory
Default: 2G
How to customize:

runtime {
    memory: "4G"
}

Google Cloud interprets this as the minimum amount of memory to use. If you request more CPUs than makes sense for your memory request, Google automatically bumps up to the appropriate amount of memory. For example, you cannot request 8 CPUs and 1 GB of memory, since each n1 CPU needs between 1 and 6.5 GB of memory.
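
Written as a runtime block, the example above (illustrative values) looks like this; Google would raise the memory to at least 8 GB to satisfy the 1 GB-per-CPU minimum:

runtime {
    cpu: 8
    memory: "1G"
    # 8 CPUs x 1 GB-per-CPU minimum = at least 8 GB of memory provisioned
}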

disks
Default: 10 GB SSD
How to customize:

runtime {
    disks: "local-disk 100 HDD"
}
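
Cromwell's disks attribute also accepts additional comma-separated entries with custom mount points (see the Runtime Attribute Descriptions documentation linked above); a hedged sketch, with illustrative paths and sizes:

runtime {
    # a 100 GB HDD working disk plus a 500 GB SSD mounted at /mnt/scratch
    disks: "local-disk 100 HDD, /mnt/scratch 500 SSD"
}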

bootDiskSizeGb
Default: 10 GB
How to customize:

runtime {
    # Yikes, we have a big OS in this docker image! Allow 50GB to hold it:
    bootDiskSizeGb: 50
}

maxRetries
Default: 0
How to customize:

runtime {
    maxRetries: 3
}
preemptible
Default: 0
How to customize:

runtime {
    preemptible: 1
}

For more details on saving costs with preemptible VMs, see Controlling cloud costs - sample use cases
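
One common pattern, sketched below with illustrative values, combines preemptible with maxRetries: Cromwell requests a preemptible VM up to the given number of times before falling back to a standard VM.

runtime {
    # try up to 3 preemptible VMs before falling back to a standard VM
    preemptible: 3
    # if the task still fails, retry it once more
    maxRetries: 1
}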

GPUs
Default: not used
How to customize:

runtime {
    gpuType: "nvidia-tesla-k80"
    gpuCount: 2
    nvidiaDriverVersion: "418.87.00"
    zones: ["us-central1-c"]
}

For more details on GPUs, see Why and how to use GPUs when running a workflow
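
Putting it together, a runtime block that customizes several of these attributes at once might look like the following sketch (all values are illustrative):

runtime {
    docker: "ubuntu:20.04"          # container image for the task
    cpu: 4
    memory: "16G"                   # 4 GB per CPU, within the n1 range above
    disks: "local-disk 200 SSD"
    bootDiskSizeGb: 15
    preemptible: 2
    maxRetries: 1
    zones: "us-central1-b us-central1-c"
}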
