Snowflake Connect

Snowflake data store setup in minutes

Set up the Matatika platform to deliver and process your data in Snowflake in minutes.

Automate Snowflake from a single space with no code

Snowflake is a cloud-based data warehousing platform.

Snowflake is a fully managed service that allows businesses to store, process, and analyze large amounts of structured and semi-structured data using cloud-based infrastructure. It provides a scalable and secure solution for data warehousing, data lakes, data engineering, data science, and data sharing. Snowflake's unique architecture separates compute and storage, allowing users to scale each independently and pay only for what they use. It also offers features such as automatic scaling, zero-copy cloning, and instant elasticity, making it easy for businesses to manage their data and derive insights from it. With Snowflake, users can query data using SQL, integrate with popular BI and ETL tools, and collaborate with others through secure data sharing.

Prerequisites

You will need the following settings to connect to Snowflake (a connection sketch using them follows this list):

  1. Account: This is typically provided by your Snowflake administrator or can be found in the Snowflake web interface. It is the unique identifier for your Snowflake account.

  2. Username: This is the username you use to log in to Snowflake.

  3. Password or Private Key: This is the credential used to authenticate your Snowflake user. Snowflake is phasing out password authentication for non-human (service) connections, so key-pair authentication with a private key is recommended for automated use.

  4. Role: This is the role you want to use when connecting to Snowflake.

  5. Database: This is the name of the database you want to connect to in Snowflake.

  6. Warehouse: This is the name of the virtual warehouse, i.e. the compute resource, you want to use when connecting to Snowflake.

  7. Schema: This is the name of the schema you want to use when connecting to Snowflake.
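
As a rough sketch of how these settings fit together, the example below opens a connection with the snowflake-connector-python library using key-pair authentication. This is illustrative only, not Matatika platform code, and every value shown is a placeholder.

    import snowflake.connector
    from cryptography.hazmat.primitives import serialization

    # Load the PEM private key and decrypt it with its passphrase
    # (pass password=None if the key is unencrypted).
    with open("rsa_key.p8", "rb") as key_file:
        private_key = serialization.load_pem_private_key(
            key_file.read(),
            password=b"my-passphrase",
        )

    # The connector expects the key as unencrypted DER bytes.
    private_key_der = private_key.private_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),
    )

    conn = snowflake.connector.connect(
        account="myorg-myaccount",  # account identifier
        user="LOADER_USER",
        private_key=private_key_der,
        role="LOADER_ROLE",
        database="ANALYTICS",
        warehouse="LOAD_WH",
        schema="RAW",
    )

    # Quick sanity check that the session is usable.
    print(conn.cursor().execute("SELECT CURRENT_VERSION()").fetchone())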

Settings

Account

Your account identifier. See Account Identifiers.

Database

The initial database for the Snowflake session.

Schema

The initial schema for the Snowflake session.

Default Target Schema

The default target database schema name to use for all streams.

User

The login name for your Snowflake user.

Private Key

Base64 encoded private key contents for KeyPair authentication.
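
For example, assuming a PEM key file named rsa_key.p8 (a placeholder name), the value for this setting can be produced with a few lines of Python:

    import base64

    # Base64 encode the full contents of the PEM private key file.
    with open("rsa_key.p8", "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")

    print(encoded)  # use this value as the Private Key setting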

Private Key Passphrase

Passphrase to decrypt private key if encrypted.

Warehouse

The initial warehouse for the session.

Role

The initial role for the session.

Add Record Metadata

Whether to add metadata columns (the _sdc_-prefixed columns, such as _sdc_extracted_at) to each record.

Batch Size Rows

Maximum number of rows in each batch.

Clean Up Batch Files

Whether to remove batch files after processing.

Faker Config Locale

One or more LCID locale strings used to produce localized output. See https://faker.readthedocs.io/en/master/#localization

Faker Config Seed

Value used to seed the Faker generator for deterministic output. See https://faker.readthedocs.io/en/master/#seeding-the-generator
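
To illustrate what the two Faker settings above control, here is a minimal sketch using the Faker library directly; the locale and seed values are placeholders:

    from faker import Faker

    Faker.seed(12345)        # Faker Config Seed: makes output deterministic
    fake = Faker(["en_GB"])  # Faker Config Locale: one or more locale strings

    print(fake.name())     # same value on every run with the same seed
    print(fake.address())  # localized to en_GB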

Flattening Enabled

'True' to enable schema flattening and automatically expand nested properties.

Flattening Max Depth

The max depth to flatten schemas.
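
As an illustration, assuming the double-underscore naming convention used by Meltano SDK targets, flattening expands a nested record into top-level columns:

    # Nested record as received from the source stream.
    record = {"id": 1, "customer": {"name": "Ada", "country": "UK"}}

    # With flattening enabled (and max depth >= 1), nested properties become
    # flat columns in the target table:
    flattened = {"id": 1, "customer__name": "Ada", "customer__country": "UK"}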

Hard Delete

Whether to hard delete records (physically remove them from the target table) rather than soft deleting them.

Load Method

The method to use when loading data into the destination. append-only always writes all input records, whether or not those records already exist; upsert updates existing records and inserts new ones; overwrite deletes all existing records and then inserts all input records.
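
The toy sketch below (not connector code) illustrates how the three methods treat an existing table, with rows matched on their primary key:

    existing = [{"id": 1, "v": "old"}, {"id": 2, "v": "kept"}]
    incoming = [{"id": 1, "v": "new"}, {"id": 3, "v": "added"}]

    def load(existing, incoming, method):
        if method == "overwrite":
            return list(incoming)  # all existing rows are deleted first
        if method == "append-only":
            return existing + incoming  # duplicate ids are possible
        if method == "upsert":
            merged = {row["id"]: row for row in existing}
            merged.update({row["id"]: row for row in incoming})
            return list(merged.values())

    print(load(existing, incoming, "upsert"))
    # [{'id': 1, 'v': 'new'}, {'id': 2, 'v': 'kept'}, {'id': 3, 'v': 'added'}]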

Stream Map Config

User-defined config values to be used within map expressions.

Stream Maps

Config object for stream maps capability. For more information check out Stream Maps.
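
As a hypothetical example of how these two settings relate (the stream, property, and config names are placeholders), expressed here as Python dicts; the md5 function and the config lookup follow the Meltano SDK stream maps conventions:

    # Stream Maps: add a hashed email to the "customers" stream and drop
    # the raw email property.
    stream_maps = {
        "customers": {
            "email_hash": "md5(config['hash_seed'] + email)",
            "email": None,  # mapping a property to None removes it
        }
    }

    # Stream Map Config: values made available to map expressions as config[...].
    stream_map_config = {
        "hash_seed": "some-random-seed",
    }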

Use Browser Authentication

Whether to use SSO authentication via an external browser.

Validate Records

Whether to validate the schema of the incoming streams.



Interested in learning more?

Get in touch