
Nextflow DRAGEN Pipeline

In this tutorial, we will demonstrate how to create and launch a simple DRAGEN pipeline using the Nextflow language in the ICA GUI. More information about Nextflow on ICA can be found here. For this example, we will implement the alignment and variant calling example for Paired-End FASTQ Inputs from this DRAGEN support page.

Prerequisites

The first step in creating a pipeline is to select a project for the pipeline to reside in. If the project doesn't exist, create a project. For instructions on creating a project, see the Projects page. In this tutorial, we'll use a project called "Getting Started".
After a project has been created, a DRAGEN bundle must be linked to it to obtain access to a DRAGEN Docker image. Enter the project by clicking on it, then click "Edit" on the Project Details page. From here, you can link a "DRAGEN Demo Tool" bundle to the project; the bundle selected here determines the DRAGEN version you have access to. Once the bundle has been linked to your project, you can find the Docker image and its version by navigating back to the "All Projects" page, clicking "Docker Repository", and double-clicking the DRAGEN image. This image name and tag will be used later in the container directive of the DRAGEN process defined in Nextflow, as shown in the sketch below.
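For orientation, here is a minimal, hypothetical sketch of where that image reference goes. The registry path and tag are placeholders (copy the real values from the Docker Repository view), and it assumes the image exposes the standard /opt/edico/bin/dragen executable and its --version flag:
nextflow.enable.dsl = 2

process DRAGEN_VERSION {
    // Placeholder URI: replace with the image name and tag copied from the
    // Docker Repository view of your project.
    container 'REGISTRY/DRAGEN_IMAGE:TAG'
    pod annotation: 'scheduler.illumina.com/presetSize', value: 'fpga-medium'

    output:
    stdout

    script:
    """
    /opt/edico/bin/dragen --version
    """
}

workflow {
    DRAGEN_VERSION().view()
}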

Creating the pipeline

Select the project from the "Projects" view to enter the project. From the "PROJECT DETAILS" page, navigate to the "Pipelines" view under the Flow section in the left navigation pane. From the "Pipelines" view, click the "Nextflow" button to start creating a Nextflow pipeline.
In the Nextflow pipeline creation view, the "INFORMATION" tab is used to add information about the pipeline. Add values for the required Code (pipeline name) and Description fields. "Nextflow Version" and "Storage size" default to preassigned values.
Next, we will add the Nextflow pipeline definition by navigating to the MAIN.NF tab, which contains a text editor. Copy and paste the following definition into the text editor, changing the container directive to the Docker image name and tag from the Docker Repository.
nextflow.enable.dsl = 2

process DRAGEN {
    // The container must be a DRAGEN image that is included in an accepted bundle and will determine the DRAGEN version
    container '079623148045.dkr.ecr.us-east-1.amazonaws.com/cp-prod/7ecddc68-f08b-4b43-99b6-aee3cbb34524:latest'
    pod annotation: 'scheduler.illumina.com/presetSize', value: 'fpga-medium'
    // ICA will upload everything in the "out" folder to cloud storage
    publishDir 'out', mode: 'copy'

    input:
    tuple path(read1), path(read2)
    val sample_id
    path ref_tar

    output:
    stdout emit: result
    path '*', emit: output

    script:
    """
    set -ex
    mkdir -p /scratch/reference
    tar -C /scratch/reference -xf ${ref_tar}
    /opt/edico/bin/dragen --partial-reconfig HMM --ignore-version-check true
    /opt/edico/bin/dragen --lic-instance-id-location /opt/instance-identity \\
        --output-directory ./ \\
        -1 ${read1} \\
        -2 ${read2} \\
        --intermediate-results-dir /scratch \\
        --output-file-prefix ${sample_id} \\
        --RGID ${sample_id} \\
        --RGSM ${sample_id} \\
        --ref-dir /scratch/reference \\
        --enable-variant-caller true
    """
}

workflow {
    DRAGEN(
        Channel.of([file(params.read1), file(params.read2)]),
        Channel.of(params.sample_id),
        Channel.fromPath(params.ref_tar)
    )
}
To specify a compute type for a Nextflow process, use the pod directive within that process.
Outputs for Nextflow pipelines are uploaded from the out directory in the attached shared filesystem. The publishDir directive specifies the output folder for a given process. Only data moved to the out folder via the publishDir directive will be uploaded to the ICA project after the pipeline finishes executing.
Refer to the ICA help page for details on ICA specific attributes within the Nextflow definition.
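As an illustration only, the following self-contained sketch (separate from the DRAGEN pipeline above) combines both attributes: the pod directive requests a compute type, and publishDir copies the result into the out folder. The 'standard-small' preset size is an example value; check the ICA documentation for the preset sizes available in your environment.
nextflow.enable.dsl = 2

process WRITE_MANIFEST {
    // Example preset size; ICA maps this annotation to a compute type.
    pod annotation: 'scheduler.illumina.com/presetSize', value: 'standard-small'
    // Files copied into "out" are uploaded to the ICA project when the run finishes.
    publishDir 'out', mode: 'copy'

    input:
    val sample_id

    output:
    path 'manifest.txt'

    script:
    """
    echo "sample: ${sample_id}" > manifest.txt
    """
}

workflow {
    WRITE_MANIFEST(Channel.of('demo-sample'))
}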
Next, we'll create the input form used for the pipeline. This is done through the XML CONFIGURATION tab. More information on the specifications for the input form can be found on the Input Form page.
This pipeline takes two FASTQ files, one reference TAR file, and one "sample_id" parameter as input.
Paste the following XML input form into the XML CONFIGURATION text editor. Click the Generate button (at the bottom of the text editor) to preview the launch form fields.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<pd:pipeline xmlns:pd="xsd://www.illumina.com/ica/cp/pipelinedefinition" code="" version="1.0">
    <pd:dataInputs>
        <pd:dataInput code="read1" format="FASTQ" type="FILE" required="true" multiValue="false">
            <pd:label>FASTQ Read 1</pd:label>
            <pd:description>FASTQ Read 1</pd:description>
        </pd:dataInput>
        <pd:dataInput code="read2" format="FASTQ" type="FILE" required="true" multiValue="false">
            <pd:label>FASTQ Read 2</pd:label>
            <pd:description>FASTQ Read 2</pd:description>
        </pd:dataInput>
        <pd:dataInput code="ref_tar" format="TAR" type="FILE" required="true" multiValue="false">
            <pd:label>Reference</pd:label>
            <pd:description>Reference TAR</pd:description>
        </pd:dataInput>
    </pd:dataInputs>
    <pd:steps>
        <pd:step execution="MANDATORY" code="General">
            <pd:label>General</pd:label>
            <pd:description></pd:description>
            <pd:tool code="generalparameters">
                <pd:label>General Parameters</pd:label>
                <pd:description></pd:description>
                <pd:parameter code="sample_id" minValues="1" maxValues="1" classification="USER">
                    <pd:label>Sample ID</pd:label>
                    <pd:description></pd:description>
                    <pd:stringType/>
                    <pd:value></pd:value>
                </pd:parameter>
            </pd:tool>
        </pd:step>
    </pd:steps>
</pd:pipeline>
Click the Save button to save the changes.
The dataInputs section specifies file inputs, which will be mounted when the workflow executes. Parameters defined under the steps section cover strings and other non-file input types.
Each of the dataInputs and parameters can be accessed in the Nextflow definition through the workflow's params object, named according to the code defined in the XML (e.g. params.sample_id).
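As a quick sanity check, the following minimal sketch (not part of the tutorial pipeline) simply prints the values that ICA injects into params from the XML form above:
nextflow.enable.dsl = 2

workflow {
    // Each code attribute in the XML becomes an entry in params.
    println "read1     = ${params.read1}"      // dataInput (FILE)
    println "read2     = ${params.read2}"      // dataInput (FILE)
    println "ref_tar   = ${params.ref_tar}"    // dataInput (FILE)
    println "sample_id = ${params.sample_id}"  // step parameter (string)
}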

Running the pipeline

Go to the pipelines page from the left navigation pane. Select the pipeline you just created and click Start New Analysis.
Fill in the required fields, indicated by a red "*", and click the "Start Analysis" button.
You can monitor the run from the analysis page.
Once the "Status" changes to "Succeeded", you can click on the run to access the results page.