Nextflow Pipeline

In this tutorial, we will show how to create and launch a pipeline using the Nextflow language in ICA.

This tutorial references the Basic pipeline example in the Nextflow documentation.

Create the pipeline

The first step in creating a pipeline is to create a project. For instructions on creating a project, see the Projects page. In this tutorial, we'll use a project called "Getting Started".

After creating the project, select the project from the Projects view to enter the project. Within the project, navigate to the Flow > Pipelines view in the left navigation pane. From the Pipelines view, click +Create Pipeline and then Nextflow to start creating the Nextflow pipeline.

In the Nextflow pipeline creation view, the Information tab is used to add information about the pipeline. Add values for the required Code (unique pipeline name) and Description fields.

Next we'll add the Nextflow pipeline definition. The pipeline we're creating is a modified version of the Basic pipeline example from the Nextflow documentation, with the following changes:

  • Add the container directive to each process, specifying an Ubuntu image. (If no Docker image is specified, ICA uses public.ecr.aws/lts/ubuntu:22.04_stable by default.)

  • Add the publishDir directive with value 'out' to the reverse process.

  • Modify the reverse process to write the output to a file test.txt instead of stdout.

The description of the pipeline from the linked Nextflow docs:

This example shows a pipeline that is made of two processes. The first process receives a FASTA formatted file and splits it into file chunks whose names start with the prefix seq_.

The process that follows receives these files and simply reverses their content using the rev command-line tool.

Resources: For each process, you can use the memory and cpus directives to set the Compute Type. ICA will then determine the best matching compute type based on those settings. For example, if you set memory '10240 MB' and cpus 6, ICA will determine that you need the standard-large ICA Compute Type.

Syntax example:

process iwantstandardsmallresources {
    cpus 2
    memory '8 GB'
    ...
}
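
If you prefer to name a compute type explicitly rather than have ICA derive it from cpus and memory, ICA also supports selecting a preset size through the pod directive. The annotation key below follows the ICA compute type documentation; check the current ICA docs for the exact key and the available size values:

process iwantstandardlargeresources {
    pod annotation: 'scheduler.illumina.com/presetSize', value: 'standard-large'
    ...
}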

Navigate to the Nextflow files > main.nf tab to add the definition to the pipeline. Since this is a single-file pipeline, we won't need to add any additional definition files. Paste the following definition into the text editor:

#!/usr/bin/env nextflow

params.in = "$HOME/sample.fa"

sequences = file(params.in)
// use gcsplit on macOS (where os.name reports 'Mac OS X'), csplit elsewhere
SPLIT = (System.properties['os.name'] == 'Mac OS X' ? 'gcsplit' : 'csplit')

process splitSequences {

    container 'public.ecr.aws/lts/ubuntu:22.04'

    input:
    file 'input.fa' from sequences

    output:
    file 'seq_*' into records

    """
    $SPLIT input.fa '%^>%' '/^>/' '{*}' -f seq_
    """

}

process reverse {
    
    container 'public.ecr.aws/lts/ubuntu:22.04'
    publishDir 'out'

    input:
    file x from records
    
    output:
    file 'test.txt'

    """
    cat $x | rev > test.txt
    """
}
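
Note that this definition uses the older DSL1 syntax (from/into channels), matching the Basic pipeline example. If the Nextflow version selected for your pipeline requires DSL2, an equivalent definition might look like the following sketch. The workflow wiring here is our own inference from the DSL1 channel flow above, so treat it as a starting point rather than a verified drop-in replacement:

#!/usr/bin/env nextflow

// DSL2 sketch of the same two-process pipeline. The workflow wiring is
// inferred from the DSL1 channel flow above and is untested in ICA.

params.in = "$HOME/sample.fa"

process splitSequences {

    container 'public.ecr.aws/lts/ubuntu:22.04'

    input:
    path 'input.fa'

    output:
    path 'seq_*'

    // csplit can be used directly: the process always runs in a Linux container
    """
    csplit input.fa '%^>%' '/^>/' '{*}' -f seq_
    """
}

process reverse {

    container 'public.ecr.aws/lts/ubuntu:22.04'
    publishDir 'out'

    input:
    path x

    output:
    path 'test.txt'

    """
    cat $x | rev > test.txt
    """
}

workflow {
    // splitSequences emits its chunk files as a single item, so reverse runs once
    reverse(splitSequences(file(params.in)))
}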

Next we'll create the input form used when launching the pipeline. This is done through the XML Configuration tab. Since the pipeline takes a single FASTA file as input, the XML-based input form will include a single file input. The dataInput code ("in") determines the parameter name under which ICA passes the selected file to the pipeline, matching params.in in the definition above.

Paste the XML input form below into the XML CONFIGURATION text editor, then click the Generate button to preview the launch form fields.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<pd:pipeline xmlns:pd="xsd://www.illumina.com/ica/cp/pipelinedefinition">
    <pd:dataInputs>
        <pd:dataInput code="in" format="FASTA" type="FILE" required="true" multiValue="false">
            <pd:label>in</pd:label>
            <pd:description>fasta file input</pd:description>
        </pd:dataInput>
    </pd:dataInputs>
    <pd:steps/>
</pd:pipeline>
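
Additional inputs follow the same pattern: each pd:dataInput element inside pd:dataInputs becomes a field on the launch form, and its code becomes the corresponding Nextflow parameter name. As a purely hypothetical illustration (the code and format values below are invented for this example), an optional second input would look like:

<pd:dataInput code="extra" format="TXT" type="FILE" required="false" multiValue="false">
    <pd:label>extra</pd:label>
    <pd:description>optional extra file, available to the pipeline as params.extra</pd:description>
</pd:dataInput>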

With the definition added and the input form defined, the pipeline is complete.

On the Documentation tab, you can fill out additional information about your pipeline. This information will be presented under the Documentation tab whenever a user starts a new analysis on the pipeline.

Click the Save button at the top right. The pipeline will now be visible from the Pipelines view within the project.

Launch the pipeline

Before we launch the pipeline, we'll need to upload a FASTA file to use as input. In this tutorial, we'll use a public FASTA file from the UCSC Genome Browser. Download the chr1_GL383518v1_alt.fa.gz file and decompress it to obtain the FASTA file, as shown below.
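
A minimal way to fetch and decompress the file from a terminal, assuming wget and gunzip are available (the URL points at the UCSC hg38 downloads area):

wget https://hgdownload.soe.ucsc.edu/goldenPath/hg38/chromosomes/chr1_GL383518v1_alt.fa.gz
gunzip chr1_GL383518v1_alt.fa.gz    # produces chr1_GL383518v1_alt.fa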

To upload the FASTA file to the project, first navigate to the Data section in the left navigation pane. In the Data view, drag and drop the FASTA file from your local machine into the indicated section in the browser. Once the upload completes, the file record will appear in the Data explorer. Ensure that the format of the file is set to "FASTA".

Now that the input data is uploaded, we can proceed to launch the pipeline. Navigate to the Analyses view and click the button to Start Analysis. Next, select your pipeline from the list. Alternatively you can start your pipeline from Projects > your_project > Flow > Pipelines > Start new analysis.

In the Launch Pipeline view, the input form fields are presented along with some required information to create the analysis.

  • Enter a User Reference (identifier) for the analysis. This will be used to identify the analysis record after launching.

  • Set the Entitlement Bundle (there will typically only be a single option).

  • In the Input Files section, select the FASTA file (chr1_GL383518v1_alt.fa) as the single input file.

  • Set the Storage size to Small. This will attach a 1.2 TB shared file system to the environment used to run the pipeline.

With the required information set, click the button to Start Analysis.

Monitor Analysis

After launching the pipeline, navigate to the Analyses view in the left navigation pane.

The analysis record will be visible from the Analyses view. The Status will transition through the analysis states as the pipeline progresses. It may take some time (depending on resource availability) for the environment to initialize and the analysis to move to the In Progress status.

Click the analysis record to enter the analysis details view.

Once the pipeline succeeds, the analysis record will show the "Succeeded" status. Note that this may take considerable time for a first analysis because compute resources must be provisioned (in our example, the analysis took 28 minutes).

From the analysis details view, the logs produced by each process within the Nextflow pipeline are accessible via the Logs tab.

View Results

Analysis outputs are written to an output directory in the project with the naming convention {Analysis User Reference}-{Pipeline Code}-{GUID}.

Inside the analysis output directory are the files the pipeline's processes published to the 'out' directory. In this tutorial, the file test.txt is written by the reverse process. Navigating into the analysis output directory, clicking into the test.txt file details, and opening the VIEW tab shows the output file contents.

The "Download" button (4) can be used to download the data to the local machine.
