Analyses

An Analysis is the long-running execution of a pipeline.

Lifecycle

| Status | Description | Terminal |
| --- | --- | --- |
| Awaiting Input | - | No |
| Requested | Analysis has been queued | No |
| In Progress | Analysis execution is in progress | No |
| Aborting | Analysis has been requested to abort | No |
| Aborted | Analysis was aborted | Yes |
| Failed | Analysis finished with an error | Yes |
| Succeeded | Analysis finished successfully | Yes |
When an analysis is started, limited resource availability may delay the start of the pipeline, or of specific steps after execution has begun. Analyses are subject to such delays when the system is under high load.

Logs

During the execution of an analysis, logs are produced by each process involved in the analysis lifecycle. In the analysis details view, the Logs tab shows these logs in near real time as they are produced by the running processes. Analyses with more than 50 steps use a grid layout; analyses with 50 steps or fewer use a tiled view, though the tile/grid button at the top right of the Logs tab can switch those to the grid layout as well.
System processes are involved in the lifecycle of all analyses (e.g. downloading inputs, uploading outputs), while other processes are pipeline-specific, such as the processes which execute the pipeline steps. The table below describes the system processes.
| Process | Description |
| --- | --- |
| Setup Environment | Validate that the analysis execution environment is prepared |
| Run Monitor | Monitor resource usage for billing and reporting |
| Prepare Input Data | Download and mount input data to the shared file system |
| Pipeline Runner | Parent process which executes the pipeline definition |
| Finalize Output Data | Upload output data |
Additional log entries appear for the processes which execute the steps defined in the pipeline.
Each process shows as a distinct entry in the Logs view with a Queue Date, Start Date, and End Date.
| Timestamp | Description |
| --- | --- |
| Queue Date | The time when the process was submitted to the process scheduler for execution |
| Start Date | The time when the process started execution |
| End Date | The time when the process stopped execution |
The time between the Start Date and the End Date determines the Duration. The Duration is used to calculate the usage-based cost of the analysis.
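As a minimal sketch, the Duration and a usage-based cost can be derived from the two timestamps as follows. This assumes ISO-8601 timestamps; the hourly rate shown is purely hypothetical, as actual billing rates are not defined in this document.

```python
from datetime import datetime

def process_duration_seconds(start_date: str, end_date: str) -> float:
    """Duration between Start Date and End Date, parsed as ISO-8601 timestamps."""
    start = datetime.fromisoformat(start_date.replace("Z", "+00:00"))
    end = datetime.fromisoformat(end_date.replace("Z", "+00:00"))
    return (end - start).total_seconds()

def usage_cost(duration_seconds: float, rate_per_hour: float) -> float:
    """Usage-based cost for a process; the rate is a placeholder, not a real tariff."""
    return duration_seconds / 3600 * rate_per_hour

duration = process_duration_seconds("2024-01-01T10:00:00Z", "2024-01-01T10:30:00Z")
print(duration)                    # 1800.0
print(usage_cost(duration, 2.0))   # 1.0
```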
Each log entry in the Logs view contains a checkbox to view the stdout and stderr log files for the process. Clicking a checkbox adds the log as a tab to the log viewer where the log text is displayed and made available for download.

Log Streaming

Logs can also be streamed using websocket client tooling. The API to retrieve analysis step details returns websocket URLs for each step, which can be used to stream the stdout/stderr logs during the step's execution. Once the step completes, the websocket URL is no longer available.
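A sketch of collecting the per-step streaming URLs from the step-details response is shown below. The exact response schema is an assumption here; the field names (`items`, `name`, `websocketUrlStdout`, `websocketUrlStderr`) are illustrative placeholders, not the documented API shape.

```python
import json

# Example step-details payload; field names are assumed for illustration only.
step_details = json.loads("""
{
  "items": [
    {
      "name": "Pipeline Runner",
      "websocketUrlStdout": "wss://example.invalid/logs/stdout",
      "websocketUrlStderr": "wss://example.invalid/logs/stderr"
    }
  ]
}
""")

def extract_stream_urls(details: dict) -> list:
    """Collect (step name, stdout URL, stderr URL) tuples for each step."""
    return [
        (step["name"], step["websocketUrlStdout"], step["websocketUrlStderr"])
        for step in details["items"]
    ]

for name, out_url, err_url in extract_stream_urls(step_details):
    print(name, out_url, err_url)
```

Each URL could then be passed to any websocket client (e.g. `wscat` or a Python websocket library) to stream the log while the step is running.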

Analysis Output Mappings

Currently, this feature is only available when launching Nextflow analyses via the API, and only FOLDER type output mappings are supported.
By default, analysis outputs are directed to a new folder within the project where the analysis is launched. Analysis output mappings may be specified to redirect outputs to user-specified locations consisting of project and path. An output mapping consists of:
  • the source path on the local disk of the analysis execution environment, relative to the working directory
  • the data type, either FILE or FOLDER
  • the target project ID to direct outputs to; analysis launcher must have contributor access to the project
  • the target path relative to the root of the project data to write the outputs
Use the example analysis output mapping below for guidance.
If the output directory already exists, any existing contents with the same filenames as those output from the pipeline will be overwritten by the new analysis.
In this example, 2 analysis output mappings are specified. The analysis writes data during execution in the working directory at paths out/test1 and out/test2. The data contained in these folders are directed to the project with ID 4d350d0f-88d8-4640-886d-5b8a23de7d81, at paths /output-testing-01/ and /output-testing-02/ respectively, relative to the root of the project data.
The following demonstrates the construction of the request body to start an analysis with the output mappings described above:
```json
{
  ...
  "analysisOutput": [
    {
      "sourcePath": "out/test1",
      "type": "FOLDER",
      "targetProjectId": "4d350d0f-88d8-4640-886d-5b8a23de7d81",
      "targetPath": "/output-testing-01/"
    },
    {
      "sourcePath": "out/test2",
      "type": "FOLDER",
      "targetProjectId": "4d350d0f-88d8-4640-886d-5b8a23de7d81",
      "targetPath": "/output-testing-02/"
    }
  ]
}
```
When the analysis completes, the outputs can be seen in the ICA UI within the folders designated in the payload JSON during pipeline launch (output-testing-01 and output-testing-02).

Linking

Linking to individual analyses and workflow sessions uses the syntax <hostURL>/ica/link/project/<projectUUID>/analysis/<analysisUUID> and <hostURL>/ica/link/project/<projectUUID>/workflowSession/<workflowsessionUUID>, respectively.
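The link syntax above can be sketched as two small helper functions. The host URL and UUIDs below are placeholders, not real identifiers.

```python
# Build deep links to analyses and workflow sessions following the
# documented syntax; all argument values here are placeholders.
def analysis_link(host_url: str, project_uuid: str, analysis_uuid: str) -> str:
    return f"{host_url}/ica/link/project/{project_uuid}/analysis/{analysis_uuid}"

def workflow_session_link(host_url: str, project_uuid: str, session_uuid: str) -> str:
    return f"{host_url}/ica/link/project/{project_uuid}/workflowSession/{session_uuid}"

print(analysis_link("https://example.invalid", "proj-uuid", "analysis-uuid"))
# https://example.invalid/ica/link/project/proj-uuid/analysis/analysis-uuid
```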