Nextflow

ICA supports running pipelines defined using Nextflow. See this tutorial for an example.

To run Nextflow pipelines on ICA, some process-level attributes within the Nextflow definition must be considered.

System Information

Nextflow version: 20.10.0 (default), 22.04.3
Executor: Kubernetes

A user can select the Nextflow version while building a pipeline, either through the Graphical User Interface (GUI) or the API.

GUI

In the GUI, a user can choose the Nextflow version from the "Nextflow Version" dropdown menu on the Information page of the "Create new Nextflow pipeline" screen.

API

In the API, a user can select the Nextflow version by passing it in the optional field "pipelineLanguageVersionId". When this value is not provided, the default Nextflow version is used for the pipeline.

Compute Node

For each compute type, either the standard (default) or the economy tier can be selected; these correspond to AWS on-demand and spot instances, respectively. The tier is selected with the pod annotation scheduler.illumina.com/lifecycle, set to either standard or economy.
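
For example, a process can request the economy tier through the pod directive described in the next section. A minimal sketch, where the process name and script are purely illustrative:

process economy_example {
    // Run this process on a spot (economy) instance
    pod annotation: 'scheduler.illumina.com/lifecycle', value: 'economy'

    script:
    """
    echo "running on an economy-tier node"
    """
}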

Compute Type

To specify a compute type for a Nextflow process, use the pod directive within each process. Set the annotation to scheduler.illumina.com/presetSize and the value to the desired compute type. A list of available compute types can be found here. The default compute type, when this directive is not specified, is standard-small (2 CPUs and 8 GB of memory).

pod annotation: 'scheduler.illumina.com/presetSize', value: 'fpga-medium'

Oftentimes, there is a need to select the compute size for a process dynamically, based on user input or other factors. Because the Kubernetes executor used on ICA does not honor the cpu and memory directives, the compute size cannot be selected that way. Fortunately, since Nextflow allows any directive to be set dynamically, the pod directive can also be set dynamically, as mentioned here. e.g.

process foo {
    // Assuming that params.compute_size is set to a valid size such as 'standard-small', 'standard-medium', etc.
    pod annotation: 'scheduler.illumina.com/presetSize', value: "${params.compute_size}"
}

The pod directive can also be specified in the configuration file. Example configuration file:

process {
    // Set the default pod (compute type) for all processes
    pod = [
        annotation: 'scheduler.illumina.com/presetSize',
        value     : 'standard-small'
    ]

    // Use a high-memory instance for a specific process
    withName: 'big_memory_process' {
        pod = [
            annotation: 'scheduler.illumina.com/presetSize',
            value     : 'himem-large'
        ]
    }

    // Use an FPGA instance for dragen processes
    withLabel: 'dragen' {
        pod = [
            annotation: 'scheduler.illumina.com/presetSize',
            value     : 'fpga-medium'
        ]
    }
}

Inputs

Inputs are specified via the input form XML. Each code defined in the XML corresponds to a field of the same name in the params object available to the workflow. Refer to the tutorial for an example.
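
For example, assuming the input form XML defines a file input with code input_file and a string setting with code sample_name (both codes are purely illustrative), they can be read from params in the workflow:

workflow {
    // params.input_file and params.sample_name are populated from the
    // input form XML codes of the same name (illustrative codes)
    ch_reads = Channel.fromPath(params.input_file)
    ch_reads.view { file -> "Processing ${file} for sample ${params.sample_name}" }
}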

Outputs

Outputs for Nextflow pipelines are uploaded from the out directory in the attached shared filesystem. The publishDir directive can be used to symlink (recommended), copy or move data to the correct folder. Data will be uploaded to the ICA project after the pipeline execution completes.

publishDir 'out', mode: 'symlink'
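
In context, the directive sits inside a process whose outputs should be delivered to the project. A minimal sketch, with the process name and output file purely illustrative:

process make_report {
    // Symlink declared outputs into the out directory so ICA uploads them
    publishDir 'out', mode: 'symlink'

    output:
    path 'report.txt'

    script:
    """
    echo "pipeline report" > report.txt
    """
}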

For Nextflow version 20.10.0 on ICA, using the "copy" method in the publishDir directive for output files that consume large amounts of storage may cause workflow runs to complete with missing files. The underlying issue is that file uploads can fail silently (without any error message) during the publishDir step due to insufficient disk space, resulting in incomplete output delivery.

Workarounds:

  1. Use "symlink" instead of "copy" in the publishDir directive. Symlinking creates a link to the original file rather than copying it, which doesn’t consume additional disk space. This can prevent the issue of silent file upload failures due to disk space limitations.

  2. Use the latest supported version of Nextflow (22.04.3) and enable the "failOnError" publishDir option, as shown below. This option ensures that the workflow fails with an error message if there is an issue publishing files, rather than completing silently without all expected outputs.
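
A minimal sketch of the second workaround; apart from failOnError, the directive values mirror the examples above:

publishDir 'out', mode: 'copy', failOnError: true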

Nextflow Configuration

During execution, the Nextflow pipeline runner determines environment settings from values passed on the command line or in a configuration file (see the Nextflow Configuration documentation). When creating a Nextflow pipeline, use the nextflow.config tab in the UI (also available via the API) to specify a Nextflow configuration file to be used when launching the pipeline.

If no Docker image is specified, an Ubuntu image will be used as the default.
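
For example, a default image can be set for all processes in the nextflow.config file; the image name below is only an illustration:

// Default container image for every process (illustrative image name)
process.container = 'ubuntu:22.04'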

The following configuration settings will be ignored if provided, as they are overridden by the system:

executor.name
executor.queueSize
k8s.namespace
k8s.serviceAccount
k8s.launchDir
k8s.projectDir
k8s.workDir
k8s.storageClaimName
k8s.storageMountPath
trace.enabled
trace.file
trace.fields
timeline.enabled
timeline.file
report.enabled
report.file
dag.enabled
dag.file
