Data


The Data section gives you access to the files and folders stored in the project as well as those linked to the project. Here, you can perform searches and data management operations such as moving, copying, deleting and (un)archiving.

Recommended Practices

File/Folder Naming

ICA supports UTF-8 characters in file and folder names for data. Please follow the guidelines detailed below. (For more information about recommended approaches to file naming that are applicable across platforms, please refer to the AWS S3 documentation.)

The length of the file name (minus prefixes and delimiters) is ideally limited to 32 characters.

Characters generally considered "safe"
  • Alphanumeric characters

    • 0-9

    • a-z

    • A-Z

  • Special characters

    • Exclamation point !

    • Hyphen -

    • Underscore _

    • Period .

    • Asterisk *

    • Single quote '

    • Open parenthesis (

    • Closed parenthesis )

Folders cannot be renamed after they have been created. To rename a folder, you will need to create a new folder with the desired name, move the contents from the original folder into the new one, and then delete the original folder. Please see the Move Data section for more information.

Troubleshooting

If you get an error "Unable to generate credentials from the objectstore as the requested path is too long." from AWS when requesting temporary credentials, then the path should be shortened.

You can truncate the sample name and user reference, or use advanced output mapping in the API, which avoids generating the long folder names and creates the output in the location defined by targetPath:

"analysisOutput": [
    {
        "sourcePath": "out",
        "type": "FOLDER",
        "targetProjectId": "enter_your_target_project_id",
        "targetPath": "/enter_your_target_folder/"
    }
]

Data Formats

See the list of supported Data Formats.

Data Privacy

Data privacy should be carefully considered when adding data in ICA, either through storage configurations (i.e., AWS S3) or ICA data upload. Be aware that when adding data from cloud storage providers by creating a storage configuration, ICA will provide access to the data. Ensure the storage configuration source settings are correct and ensure uploads do not include unintended data in order to avoid unintentional privacy breaches. More guidance can be found in the ICA Security and Compliance section.

Data Integrity

See Data Integrity.

Data Management

To prevent cost issues, you cannot perform actions which would write data to the workspace, such as copying and moving data, when the project billing mode is set to tenant and the owning tenant of the folder is not the current user's tenant.


Viewing Data

On the Projects > your_project > Data page, you can view file information and preview files.

To view file details, click on the filename.

  • Run input tags identify the last 100 pipelines which used this file as input.

  • Connector tags indicate if the file was added via browser upload or connector.

To view file contents, select the checkbox at the beginning of the line and then select View from the top menu. Alternatively, you can first click on the filename to see the details and then click View to preview the file.

When you share the data view by sharing the link from your browser, filters and sorting are retained in the link, so the recipient will see the same data in the same order.

To see the ongoing actions (copying from, copying to, moving from, moving to) on data in the data overview (Projects > your_project > Data), add the ongoing actions column from the column list. This contains a list of ongoing actions sorted by when they were created. You can also consult the data detail view for ongoing actions by clicking on the data in the overview. When clicking on an ongoing action itself, the data job details of the most recently created data job are shown.

For folders, the list of ongoing actions is displayed at the top left of the folder details. When clicking the list, the data job details of the most recently created data job across all actions are shown.

Filtering

To add filters, select the funnel/filter symbol at the top right, next to the search field.

Filters are reset when you exit the current screen.

Sorting

To sort data, select the three vertical dots in the column header on which you want to sort and choose ascending or descending.

Sorting is retained when you exit the current screen.

Displaying Columns

To change which columns are displayed, select the three-columns symbol and select which columns should be shown.

You can keep track of which files are externally controlled and which are ICA-managed by means of the "managed by" column.

The displayed columns are retained when you exit the current screen.

Secondary Data

When Secondary Data is added to a data record, those secondary data records are mounted in the same parent folder path as the primary data file when the primary data file is provided as an input to a pipeline. Secondary data is intended to work with the CWL secondaryFiles feature. This is commonly used with genomic data such as BAM files with companion BAM index files (refer to https://www.ncbi.nlm.nih.gov/tools/gbench/tutorial6/ for an example).


Hyperlinking to Data

To hyperlink to data, use the following syntax:

https://<ServerURL>/ica/link/project/<ProjectID>/data/<FolderID>
https://<ServerURL>/ica/link/project/<ProjectID>/analysis/<AnalysisID>
Where to find each variable:

  • ServerURL: see the browser address bar

  • ProjectID: YourProject > Details > URN > urn:ilmn:ica:project:ProjectID#MyProject

  • FolderID: YourProject > Data > folder > folder details > ID

  • AnalysisID: YourProject > Flow > Analyses > YourAnalysis > ID

Normal permission checks still apply with these links. If you try to follow a link to data to which you do not have access, you will be returned to the main project screen or login screen, depending on your permissions.
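As a quick illustration, a link can be assembled as a plain string once the variables above are known. This is a minimal sketch; the values below are placeholders, not real identifiers:

# Placeholders only: substitute the values found at the locations listed above.
server_url = "ica.illumina.com"   # ServerURL from the browser address bar
project_id = "<ProjectID>"        # from YourProject > Details > URN
folder_id = "<FolderID>"          # from the folder details

data_link = f"https://{server_url}/ica/link/project/{project_id}/data/{folder_id}"
print(data_link)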


Uploading Data

Uploading data to the platform makes it available for consumption by analysis workflows and tools.

UI Upload

To upload data manually via the drag-and-drop interface in the platform UI, go to Projects > your_project > Data and either

  • Drag a file from your system into the Choose a file or drag it here box.

  • Select the Choose a file or drag it here box, and then choose a file. Select Open to upload the file.

Your files are added to the Data page with status partial during upload and become available when upload completes.

Do not close the ICA tab in your browser while data uploads.

Uploads via the UI are limited to 5TB and no more than 100 concurrent files at a time. For practical and performance reasons, it is recommended to use the CLI or Service Connector when uploading large amounts of data.

Upload Data via CLI

For instructions on uploading/downloading data via the CLI, see CLI Data Transfer.


Copying Data

You can copy data within the same project to a different folder, or from another project to which you have access.

In order to copy data, the following rights must be assigned to the person copying the data:

Copy Data Rights

Within a project

  • Source Project: Contributor rights; Upload and Download rights

  • Destination Project: Contributor rights; Upload and Download rights

Between different projects

  • Source Project: Download rights; Viewer rights

  • Destination Project: Upload rights; Contributor rights

The following restrictions apply when copying data:

Copy Data Restrictions

Within a project

  • Source Project: No linked data; No partial data; No archived data

  • Destination Project: No linked data

Between different projects

  • Source Project: Data sharing enabled; No partial data; No archived data; Within the same region

  • Destination Project: No linked data; Within the same region

Data in the "Partial" or "Archived" state will be skipped during a copy job.

To use data copy:

  1. Go to the destination project for your data copy and proceed to Projects > your_project > Data > Manage > Copy From.

  2. Optionally, use the filters (Type, Name, Status, Format or additional filters) to filter the data, or search with the search box.

  3. Select the data (individual files or folders with data) you want to copy.

  4. Select any metadata which you want to keep with the copied data (user tags, technical system tags or instrument information).

  5. Select which action to take if the data already exists (overwrite existing data, don't copy, or keep both the original and the new copy by appending a number to the copied data).

  6. Select Copy Data to copy the data to your project. You can see the progress in Projects > your_project > Activity > Batch Jobs and, if your browser permits it, a pop-up message will be displayed when the copy process completes.

The outcome can be one of the following:

  • INITIALIZED

  • WAITING_FOR_RESOURCES

  • RUNNING

  • STOPPED - When choosing to stop the batch job.

  • SUCCEEDED - All files and folders are copied.

  • PARTIALLY_SUCCEEDED - Some files and folders could be copied, but not all. Partially succeeded will typically occur when files were being modified or unavailable while the copy process was running.

  • FAILED - None of the files and folders could be copied.

To see the ongoing actions on data in the data overview (Projects > your_project > Data), you can add the ongoing actions column from the column list with the three column symbol at the top right, next to the filter funnel. You can also consult the data detail view for ongoing actions by clicking on the data in the overview.

There is a difference in copy type behavior between copying files and folders. The behavior is designed for files, and it is best practice not to copy folders if there already is a folder with the same name in the destination location. The available copy types are:

  • Replace: Overwrites the existing data. Folders will copy their data into an existing folder with existing files. Existing files will be replaced when a file with the same name is copied, new files will be added, and the remaining files in the target folder will remain unchanged.

  • Don't copy: The original files are kept. If you selected a folder, files that do not yet exist in the destination folder are added to it. Files that already exist at the destination are not copied over and the originals are kept.

  • Keep both: Files have a number appended to them if they already exist. If you copy folders, the folders are merged, with new files added to the destination folder and original files kept. New files with the same name get copied over into the folder with a number appended.

Notes on copying data

  • Copying data comes with an additional storage cost as it will create a copy of the data.

  • You can copy over the same data multiple times.

  • On the command-line interface, the command to copy data is icav2 projectdata copy.

  • Copying data from your own S3 storage requires additional configuration. See Connect AWS S3 Bucket and SSE-KMS Encryption.


Move Data

You can move data both within a project and between different projects to which you have access. If you allow notifications from your browser, a pop-up will appear when the move is completed.

  • Move From is used when you are in the destination location.

  • Move To is used when you are in the source location.

Before moving the data, pre-checks are performed to verify that the data can be moved and that no operations are currently running on the folder. Conflicting jobs and missing permissions will be reported. Once the move has started, no other operation should be performed on the data being moved, to avoid potential data loss or duplication. Adding or (un)archiving files during the move may result in duplicate folders and files with different identifiers. If this happens, you will need to manually delete the duplicate files and move the files which were skipped during the initial move.

You should not change the source data while a Move job is in progress; this will result in the job being aborted. See the Troubleshooting section below for information on how to fix this if it occurs.

Troubleshooting
  1. If the source or destination of data being moved is modified, the Move job will detect the changes and abort.

  2. Modifying data at either the source or destination during a Move process can result in incomplete data transfer. Users can still manually move any remaining data afterward. This partial move may cause data at the destination to become unsynchronized between the object store (S3) and ICA. To resolve this, create a folder session on the parent folder of the destination directory by following the steps in the API: Create Folder Session and then Complete Folder Session. Ensure the Move job is already aborted before submitting the folder session create and complete requests, and wait for the session status to complete.

A number of rights and restrictions apply to moving data, as a move deletes the data from the source location.

Move Data Rights

Within a project

  • Source Project: Contributor rights

  • Destination Project: Contributor rights

Between different projects

  • Source Project: Download rights; Contributor rights

  • Destination Project: Upload rights; Viewer rights

Move Data Restrictions

Within a project

  • Source Project: No linked data; No partial data; No archived data

  • Destination Project: No linked data

Between different projects

  • Source Project: Data sharing enabled; Data owned by the user's tenant; No linked data; No partial data; No archived data; No externally managed projects; Within the same region

  • Destination Project: No linked data; Within the same region

Move jobs will fail if any data being moved is in the "Partial" or "Archived" state.

Move Data From

Move Data From is used when you are in the destination location.

  1. Navigate to Projects > your_project > Data > your_destination_location > Manage > Move From.

  2. Select the files and folders which you want to move.

  3. Select the Move button. Moving large amounts of data can take considerable time. You can monitor the progress at Projects > your_project > Activity > Batch Jobs.

Move Data To

Move Data To is used when you are in the source location. You will need to select the data you want to move from the current location and the destination to move it to.

  1. Navigate to Projects > your_project > Data > your_source_location.

  2. Select the files and folders which you want to move.

  3. Go to Projects > your_project > Data > your_source_location > Manage > Move To.

  4. Select your target project and location.

  5. Select the Move button. Moving large amounts of data can take considerable time. You can monitor the progress at Projects > your_project > Activity > Batch Jobs.

Note: You can create a new folder to move data to by filling in the "New folder name (optional)" field. This does NOT rename an existing folder. To rename an existing folder, please see File/Folder Naming.

Move Status

  • INITIALIZED

  • WAITING_FOR_RESOURCES

  • RUNNING

  • STOPPED - When choosing to stop the batch job.

  • SUCCEEDED - All files and folders are moved.

  • PARTIALLY_SUCCEEDED - Some files and folders could be moved, but not all. Partially succeeded will typically occur when files were being modified or unavailable while the move process was running.

  • FAILED - None of the files and folders could be moved.

To see the ongoing actions on data in the data overview (Projects > your_project > Data), you can add the ongoing actions column from the column list with the three column symbol at the top right, next to the filter funnel. You can also consult the data detail view for ongoing actions by clicking on the data in the overview.

Restrictions:

  • A total maximum of 1000 items can be moved in one operation. An item can be either a file or a folder. Folders, including their subfolders and files, still count as one item.

  • You cannot move files and folders to a destination where one or more files or folders with the same name already exist.

  • You cannot move files and folders into linked data.

  • You cannot move a folder to itself.

  • You cannot move data which is in the process of being moved.

  • You cannot move data between regions.

  • You cannot move data from externally-managed projects.

  • You cannot move externally managed data.

  • You cannot move linked data.

  • You can only move data when it has the status available.

  • To move data across projects, the data must be owned by the user's tenant.

  • If you do not select a target folder for Move Data To, the root folder of the target project is used.

If you are only able to select your source project as the target data project, this may indicate that data sharing (Projects > your_project > Project Settings > Details > Data Sharing) is not enabled for your project or that you do not have upload rights in other projects.


Download Data

Single files can be downloaded directly from within the UI.

  • Select the checkbox next to the file which you want to download, followed by Download > Select Browser Download > Download.

  • You can also download files from their details screen. Click on the file name and select Download at the bottom of the screen. Depending on the size of your file, it may take some time to load the file contents.

Schedule for Download

You can trigger an asynchronous download via the Service Connector by using the Schedule for Download button with one or more files selected.

  1. Select a file or files to download.

  2. Select Download > Schedule download (for files or folders). This will display a list of all available connectors.

  3. Select a connector and optionally, enter your email address if you want to be notified of download completion, and then select Download.

If you do not have a connector, you can click the Don't have a connector yet? option to create a new connector. You must then install this new connector and return to the file selection in step 1 to use it.

You can view the progress of the download or stop the download on the Activity page for the project.


Export Project Data Information

The data records contained in a project can be exported in CSV, JSON, or Excel format.

  1. Select one or more files to export.

  2. Select Export.

  3. Select the following export options:

    • To export only the selected files, select Selected rows as the Rows to export option. To export all files on the page, select Current page.

    • To export only the columns currently displayed, select Visible columns as the Columns to export option.

  4. Select the export format.


Archiving and Deleting files

To manually archive or delete files, do as follows:

  1. Select the checkbox next to the file or files to delete or archive.

  2. Select Manage, and then select one of the following options:

    • Archive — Move the file or files to long-term storage (event code ICA_DATA_110).

    • Unarchive — Return the file or files from long-term storage. Unarchiving can take up to 48 hours, regardless of file size. Unarchived files can be used in analysis (event code ICA_DATA_114).

    • Delete — Remove the file completely (event code ICA_DATA_106).

When attempting concurrent archiving or unarchiving of the same file, a message will inform you to wait for the currently running (un)archiving to finish first.

To archive or delete files programmatically, you can use ICA's API endpoints:

  1. GET the file's information.

  2. Modify the dates of the file to be deleted/archived.

  3. PUT the updated information back in ICA.

Python Example

The Python snippet below illustrates the approach: it sets (or updates, if already set) the time at which a specific file will be archived:

import requests
import json

from config import PROJECT_ID, DATA_ID, API_KEY

url_get="https://ica.illumina.com/ica/rest/api/projects/" + PROJECT_ID + "/data/" + DATA_ID

# set the API get headers
headers = {
            'X-API-Key': API_KEY,
            'accept': 'application/vnd.illumina.v3+json'
            }

# set the API put headers
headers_put = {
            'X-API-Key': API_KEY,
            'accept': 'application/vnd.illumina.v3+json',
            'Content-Type': 'application/vnd.illumina.v3+json'
            }

# Helper function to insert willBeArchivedAt after field named 'region'
def insert_after_region(details_dict, timestamp):
    new_dict = {}
    for k, v in details_dict.items():
        new_dict[k] = v
        if k == 'region':
            new_dict['willBeArchivedAt'] = timestamp
    if 'willBeArchivedAt' in details_dict:
        new_dict['willBeArchivedAt'] = timestamp
    return new_dict

# 1. Make the GET request
response = requests.get(url_get, headers=headers)
response_data = response.json()

# 2. Modify the JSON data
timestamp = "2024-01-26T12:00:04Z"  # Replace with the provided timestamp
response_data['data']['details'] = insert_after_region(response_data['data']['details'], timestamp)

# 3. Make the PUT request
put_response = requests.put(url_get, data=json.dumps(response_data), headers=headers_put)
print(put_response.status_code)

To delete a file at a specific time point, the key 'willBeDeletedAt' should be added or changed in the same way using the API. A successful request returns status code 200. In the ICA UI, you can check the details of the file to see the updated values for 'Time To Be Archived' (willBeArchivedAt) or 'Time To Be Deleted' (willBeDeletedAt).
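The following is a minimal sketch building on the example above for scheduling deletion. The 30-day offset and the placement of the key are illustrative, and reusing url_get, headers and headers_put assumes the snippet above has already run:

import requests
import json
from datetime import datetime, timedelta, timezone

# Illustrative value: schedule deletion 30 days from now, in the UTC ISO 8601
# format used by the example above.
delete_at = (datetime.now(timezone.utc) + timedelta(days=30)).strftime("%Y-%m-%dT%H:%M:%SZ")

# Fetch the current file details (url_get and headers as defined above).
response = requests.get(url_get, headers=headers)
response_data = response.json()

details = response_data['data']['details']
if 'willBeDeletedAt' in details:
    # Update the existing deletion timestamp in place.
    details['willBeDeletedAt'] = delete_at
else:
    # Mirror the archive example and place the new key after the 'region' field.
    new_details = {}
    for k, v in details.items():
        new_details[k] = v
        if k == 'region':
            new_details['willBeDeletedAt'] = delete_at
    response_data['data']['details'] = new_details

# Write the updated details back to ICA.
put_response = requests.put(url_get, data=json.dumps(response_data), headers=headers_put)
print(put_response.status_code)  # 200 indicates success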


Link Project Data

Linking a folder creates a dynamic read-only view of the source data. You can use this to get access to data without running the risk of modifying the source material and to share data between projects. In addition, linking ensures changes to the source data are immediately visible and no additional storage is required.

You can recognise linked data by the green color and see the owning project as part of the details.

Since this is read-only access, you cannot perform actions on linked data that need write access. Actions like (un)archiving, linking, creating, deleting, adding or moving data and folders, and copying data into the linked data are not possible.

Linking data is only possible from the root folder of your destination project. The action is disabled in project subfolders.

Linking a parent folder after linking a file or subfolder will unlink the file or subfolder and link the parent folder. So root\linked_subfolder will become root\linked_parentfolder\linked_subfolder.

Migrating snapshot-linked data (linked before ICA release v2.29)

Before ICA version v.2.29, when data was linked, a snapshot was created of the file and folder structure. These links created a read-only view of the data as it was at the time of linking, but did not propagate changes to the file and folder structure. If you want to use the advantages of the new way of linking with dynamic updates, unlink the data and relink it. Since snapshot linking has been deprecated, all new data linking done in ICA v.2.29 or later has dynamic content updates.

Initial linking can take considerable time when there is a large amount of source data. However, once the initial link is made, updates to the source data will be instantaneous.

You can perform analysis on data from other projects by linking data from that project.

  1. Select Projects > your_project > Data > Manage, and then select Link.

  2. To view data by project, select the funnel symbol, and then select Owning Project. If you only know which project the data is linked to, you can choose to filter on linked projects.

  3. Select the checkbox next to the file or files to add.

  4. Select Select Data.

Your files are added to the Data page. To view the linked data file, select Add filter, and then select Links.

Display Owning Project

If you have selected multiple owning projects, you can add the owning project column to see which project owns the data.

  1. At the top of the screen, next to the filter icon, select the three-columns symbol.

  2. The Add/remove columns tab will appear.

  3. Choose Owning Project (or Linked Projects).

Linking Folders

If you link a folder instead of individual files, a warning is displayed indicating that, depending on the size of the folder, linking may take considerable time. The linking process runs in the background and the progress can be monitored on the Projects > your_project > Activity > Batch Jobs screen.

To see more details, double-click the batch job.

To see how many individual files are already linked, double-click the item.


Unlinking Project Data

To unlink data, go to the root level of your project and select the linked folder, or, if you linked individual files separately, select those linked files (limited to 100 at a time), and then select Manage > Unlink. As with linking a folder, the progress of unlinking can be monitored at Projects > your_project > Activity > Batch Jobs.


Non-indexed Folders

Non-indexed folders are designed for optimal performance in situations where no file actions are needed. They serve as fast storage, for example for temporary analysis files, where you don't need GUI access to or searches of individual files or subfolders within the folder. Think of a non-indexed folder as a data container: you can access the container, which contains all the data, but you cannot access the individual data files within the container from the GUI. As non-indexed folders contain data, they count towards your total project storage.

You can create non-indexed folders at Projects > your_project > Data > Manage > Create non-indexed folder, or with the /api/projects/{projectId}/data:createNonIndexedFolder endpoint.

The GUI considers non-indexed folders as a single object. You can access the contents of a non-indexed folder

  • as Analysis input/output

  • in Bench

  • via the API

Supported actions on non-indexed folders:

  • Creation: Yes.

  • Deletion: Yes. You can delete non-indexed folders by selecting them at Projects > your_project > Data > select the folder > Manage > Delete, or with the /api/projects/{projectId}/data/{dataId}:delete endpoint.

  • Uploading Data: Via API, Bench, and Analysis. Use non-indexed folders as normal folders for analysis runs and Bench. Different methods are available with the API, such as creating temporary credentials to upload data to S3 or using /api/projects/{projectId}/data:createFileWithUploadUrl.

  • Downloading Data: Yes. Use non-indexed folders as normal folders for analysis runs and Bench. Use temporary credentials to list and download data with the API.

  • Analysis Input/Output: Yes. Non-indexed files can be used as input for an analysis and a non-indexed folder can be used as output location. You will not be able to view the contents of the input and output in the analysis details screen.

  • Bench: Yes. Non-indexed folders can be used in Bench and the output from Bench can be written to non-indexed folders. Non-indexed folders are accessible across Bench workspaces within a project.

  • Viewing: No. The folder is a single object; you cannot view the contents.

  • Linking: Yes. You cannot see non-indexed folder contents.

  • Copying: No. Prohibited to prevent storage issues.

  • Moving: No. Prohibited to prevent storage issues.

  • Managing tags: No. You cannot see non-indexed folder contents.

  • Managing format: No. You cannot see non-indexed folder contents.

  • Use as Reference Data: No. You cannot see non-indexed folder contents.
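As a hedged sketch, a non-indexed folder could be created through the endpoint mentioned above using the same authentication pattern as the earlier Python example. The request body fields used here (name, folderPath) are assumptions for illustration only; check the ICA API reference for the exact schema:

import requests

from config import PROJECT_ID, API_KEY

# Endpoint path as listed above; base URL matches the earlier Python example.
url = "https://ica.illumina.com/ica/rest/api/projects/" + PROJECT_ID + "/data:createNonIndexedFolder"

headers = {
    'X-API-Key': API_KEY,
    'accept': 'application/vnd.illumina.v3+json',
    'Content-Type': 'application/vnd.illumina.v3+json'
}

# Assumed request body (hypothetical field names) -- verify against the API reference.
body = {
    "name": "scratch",
    "folderPath": "/"
}

response = requests.post(url, json=body, headers=headers)
print(response.status_code, response.text)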
