nf-core Pipelines


Last updated 4 days ago

Introduction

This tutorial shows you how to:

  • Import any nf-core pipeline from its public repository

  • Run the pipeline in Bench and monitor the execution

  • Deploy the pipeline as an ICA Flow pipeline

  • Launch the Flow validation test from Bench

Preparation

  • Start Bench workspace

    • For this tutorial, the instance size depends on the flow you import, and whether you use a Bench cluster:

      • If using a cluster, choose standard-small or standard-medium for the workspace master node

      • Otherwise, choose at least standard-large as nf-core pipelines often need more than 4 cores to run.

    • Select the single-user workspace permissions (aka "Access limited to workspace owner"), which allows you to deploy pipelines

    • Specify at least 100GB of disk space

  • Optional: After choosing the image, enable a cluster with at least one standard-large instance type

  • Start the workspace, then (if applicable) start the cluster

Import nf-core Pipeline to Bench

mkdir demo
cd demo
pipeline-dev import-from-nextflow nf-core/demo

If conda and/or nextflow are not installed, pipeline-dev will offer to install them.

The Nextflow files are pulled into the nextflow-src subfolder.

A larger example that still runs quickly is nf-core/sarek.

Result

/data/demo $ pipeline-dev import-from-nextflow nf-core/demo

Creating output folder nf-core/demo
Fetching project nf-core/demo

Fetching project info
project name: nf-core/demo
repository  : https://github.com/nf-core/demo
local path  : /data/.nextflow/assets/nf-core/demo
main script : main.nf
description : An nf-core demo pipeline
author      : Christopher Hakkaart

Pipeline “nf-core/demo” successfully imported into nf-core/demo.

Suggested actions:
  cd nf-core/demo
  pipeline-dev run-in-bench
  [ Iterative dev: Make code changes + re-validate with previous command ]
  pipeline-dev deploy-as-flow-pipeline
  pipeline-dev launch-validation-in-flow

Run Validation Test in Bench

All nf-core pipelines conveniently define a "test" profile that specifies a set of validation inputs for the pipeline.

The following command runs this test profile. If a Bench cluster is active, it runs on your Bench cluster, otherwise it runs on the main workspace instance.

cd nf-core/demo
pipeline-dev run-in-bench

The pipeline-dev tool uses "nextflow run ..." to run the pipeline. The full Nextflow command is printed on stdout and can be copy-pasted and adjusted if you need additional options.
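As a hedged sketch, the wrapped command typically looks like the standard nf-core test-profile invocation below; the exact command printed by pipeline-dev on stdout is authoritative, and the profile names and output path here are assumptions:

```shell
# Hypothetical reconstruction of the wrapped command; copy the real one
# from pipeline-dev's stdout. nf-core pipelines take their validation
# inputs from the "test" profile and require an --outdir.
cmd='nextflow run nextflow-src/main.nf -profile test,docker --outdir outdir'
echo "$cmd"   # run it with: eval "$cmd"
```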

Result

Monitoring

When a pipeline is running locally (i.e. not on a Bench cluster), you can monitor the task execution from another terminal with docker ps

When a pipeline is running on your Bench cluster, a few commands help to monitor the tasks and cluster. In another terminal, you can use:

  • qstat to see tasks that are pending or running

  • tail /data/logs/sge-scaler.log.<latest available workspace reboot time> to check whether the cluster is scaling up or down (it currently takes 3 to 5 minutes for a new node to become available)
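Because the scaler log name embeds the reboot time, a small sketch like the following (assuming the /data/logs path above) picks the most recent log automatically:

```shell
# Pick the newest autoscaler log by modification time and show its tail.
# Falls back to a message when no log exists yet (e.g. cluster not started).
latest_log=$(ls -t /data/logs/sge-scaler.log.* 2>/dev/null | head -n 1)
if [ -n "$latest_log" ]; then
  tail -n 20 "$latest_log"
else
  echo "no scaler log found"
fi
```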

Data Locations

  • The output of the pipeline is in the outdir folder

  • Nextflow work files are under the work folder

  • Log files are .nextflow.log* and output.log
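The locations above can be checked with plain shell; this sketch assumes you are in the pipeline folder and degrades gracefully before the first run:

```shell
# Summarize the standard locations of a pipeline-dev run; each line shows a
# location and whether it exists yet (run from the pipeline folder).
summary=$(
  for d in outdir work; do
    if [ -d "$d" ]; then
      echo "$d: $(ls "$d" | wc -l) entries"
    else
      echo "$d: not created yet"
    fi
  done
)
echo "$summary"
```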

Deploy as Flow Pipeline

pipeline-dev deploy-as-flow-pipeline

After generating a few ICA-specific files (JSON input specs for the Flow launch UI, plus the list of inputs for the next step's validation launch), the tool identifies which previous versions of the same pipeline have already been deployed. (In ICA Flow, pipeline versioning is done by including the version number in the pipeline name, so that is what is checked here.) It then asks whether you want to update the latest version or create a new one.

Choose "3" and enter a name of your choice to avoid conflicts with other users following this same tutorial.

Choice: 3
Creating ICA Flow pipeline dev-nf-core-demo_v4
Sending inputForm.json
Sending onRender.js
Sending main.nf
Sending nextflow.config

At the end, the URL of the pipeline is displayed. If you are using a terminal that supports it, Ctrl+click or middle-click can open this URL in your browser.

Run Validation Test in Flow

pipeline-dev launch-validation-in-flow

This launches an analysis in ICA Flow, using the same inputs as the nf-core pipeline's "test" profile.

Some of the input files will have been copied to your ICA project to allow the launch to take place. They are stored in the folder bench-pipeline-dev/temp-data.

Hints

Using older versions of Nextflow

Some older nf-core pipelines still use DSL1, which only works up to Nextflow version 22.

An easy solution is to create a conda environment with Nextflow 22:

conda create -n nextflow22
 
# If, like me, you never ran "conda init", do it now:
conda init
bash -l # To load the conda's bashrc changes
 
conda activate nextflow22
conda install -y -c bioconda -c conda-forge nextflow=22
 
# Check
nextflow -version
 
# Then use the pipeline-dev tools as in the demo