Base: SnowSQL

You can access the databases and tables within the Base module using the SnowSQL command-line interface. This is useful for external collaborators who do not have access to ICA core functionalities. This tutorial describes how to obtain the OAuth token and use it to access the Base module; it does not cover how to install and configure SnowSQL.

Obtaining OAuth token and URL

Once the Base module has been enabled within a project, the following details are shown in Projects > your_project > Project Settings > Details.

After clicking the Create OAuth access token button, a pop-up with the OAuth access token is displayed.

After clicking the Generate snowSQL command button, the pop-up presents the complete snowSQL command.

Copy the snowSQL command and run it in the console to log in.

You can also get the OAuth access token via API by providing <PROJECT ID> and <YOUR KEY>.

Example:

API Call:

curl -X 'POST' \
  'https://ica.illumina.com/ica/rest/api/projects/<PROJECT ID>/base:connectionDetails' \
  -H 'accept: application/vnd.illumina.v3+json' \
  -H 'X-API-Key: <YOUR KEY>' 

Response

{
  "authenticator": "oauth",
  "accessToken": "XXXXXXXXXX",
  "dnsName": "use1sf01.us-east-1.snowflakecomputing.com",
  "userPrincipalName": "xxxxx",
  "databaseName": "xxxxx",
  "schemaName": "xxx",
  "warehouseName": "xxxxxx",
  "roleName": "xxx"
}

Template snowSQL:

snowsql -a use1sf01.us-east-1 -u <userPrincipalName> --authenticator=oauth -r <roleName> -d <databaseName> -s PUBLIC -w <warehouseName> --token="<accessToken>"
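
The response fields map directly onto the template above; note that the -a account identifier is the dnsName without the .snowflakecomputing.com suffix. As a convenience, the following is a minimal shell sketch of that mapping, assuming jq is installed and that <PROJECT ID> and <YOUR KEY> are replaced with real values:

#!/usr/bin/env bash
# Sketch: fetch the Base connection details and log in with snowsql.
RESPONSE=$(curl -s -X 'POST' \
  'https://ica.illumina.com/ica/rest/api/projects/<PROJECT ID>/base:connectionDetails' \
  -H 'accept: application/vnd.illumina.v3+json' \
  -H 'X-API-Key: <YOUR KEY>')

# The -a account identifier is the dnsName minus the ".snowflakecomputing.com" suffix.
ACCOUNT=$(echo "$RESPONSE" | jq -r '.dnsName' | sed 's/\.snowflakecomputing\.com$//')

snowsql -a "$ACCOUNT" \
  -u "$(echo "$RESPONSE" | jq -r '.userPrincipalName')" \
  --authenticator=oauth \
  -r "$(echo "$RESPONSE" | jq -r '.roleName')" \
  -d "$(echo "$RESPONSE" | jq -r '.databaseName')" \
  -s "$(echo "$RESPONSE" | jq -r '.schemaName')" \
  -w "$(echo "$RESPONSE" | jq -r '.warehouseName')" \
  --token="$(echo "$RESPONSE" | jq -r '.accessToken')"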

Now you can perform a variety of tasks such as:

  1. Querying Data: execute SQL queries against tables, views, and other database objects to retrieve data from the Snowflake data warehouse.

  2. Creating and Managing Database Objects: create tables, views, stored procedures, functions, and other database objects in Snowflake. You can also modify and delete these objects as needed.

  3. Loading Data: load data into Snowflake from various sources such as local files, AWS S3, Azure Blob Storage, or Google Cloud Storage.

Overall, the SnowSQL CLI provides a powerful and flexible interface to Snowflake, allowing external users to manage the data warehouse and perform a variety of tasks efficiently and effectively without access to the ICA core.

Example Queries:

Show all tables in the database:

> SHOW TABLES;

Create a new table:

> CREATE TABLE demo1(sample_name VARCHAR, count INT);

List records in a table:

> SELECT * FROM demo1;

Load data from a file: To load data from a file, start by creating a staging area in the internal storage using the following command:

> CREATE STAGE myStage;

You can then upload the local file to the internal storage using the following command:

> PUT file:///path/to/data.tsv @myStage;

You can check whether the file was uploaded properly using the LIST command:

> LIST @myStage;

Finally, load the data using the COPY INTO command. The command below assumes data.tsv is a tab-delimited file. You can modify it to import a JSON file by setting TYPE = 'JSON'; a sketch of that variant follows the command.

> COPY INTO demo1(sample_name, count) FROM @myStage/data.tsv FILE_FORMAT = (TYPE = 'CSV' FIELD_DELIMITER = '\t');
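
The JSON variant can use a transform-on-load SELECT to pull the fields out of each record. Below is a minimal sketch, assuming a hypothetical staged file data.json whose records use the same two keys as the table columns:

-- Sketch: data.json is a hypothetical staged file; each record carries sample_name and count.
-- STRIP_OUTER_ARRAY handles a file whose records are wrapped in a top-level JSON array.
> COPY INTO demo1(sample_name, count)
  FROM (SELECT $1:sample_name::STRING, $1:count::INT FROM @myStage/data.json)
  FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE);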

Load data from a string: If you have data as a JSON string, you can insert it into the table using the following commands.

> SET myJSON_str = '{"sample_name": "from-json-str", "count": 1}';
> INSERT INTO demo1(sample_name, count)
  SELECT
    PARSE_JSON($myJSON_str):sample_name::STRING,
    PARSE_JSON($myJSON_str):count::INT;

Load data into specific columns: If you only want to load sample_name into the table, remove count from both the column list and the SELECT list, as shown below:

> SET myJSON_str = '{"sample_name": "from-json-str", "count": 1}';
> INSERT INTO demo1(sample_name)
  SELECT
    PARSE_JSON($myJSON_str):sample_name::STRING;

List the views of the database to which you are connected. Because shared database and catalogue views are created within the project database, they will be listed. However, this does not show views that are granted via another database or role, or from bundles.

> SHOW VIEWS;

Show grants, both grants made directly on tables and views and grants made to roles which in turn have grants on tables and views.

> SHOW GRANTS;