Set up the Matatika platform to deliver and process your data in Amazon Redshift in minutes.
Amazon Redshift is a cloud-based data warehousing service.
Amazon Redshift allows businesses to store and analyze large amounts of data in a cost-effective and scalable way. It can handle petabyte-scale data warehouses and offers fast query performance using SQL. It also integrates with other AWS services such as S3, EMR, and Kinesis. With Redshift, businesses can easily manage their data and gain insights to make informed decisions.
If set to false, the target will ignore ACTIVATE_VERSION messages. If set to true, add_record_metadata must be set to true as well.
Note that this must be enabled for activate_version to work! This adds _sdc_extracted_at, _sdc_batched_at, and more to every table. See https://sdk.meltano.com/en/latest/implementation/record_metadata.html for more information.
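If you want to confirm the metadata columns after a load, one way is to query information_schema. A minimal sketch using psycopg2, with placeholder connection details and a hypothetical tap_clickup.tasks table:

```python
import psycopg2

# Placeholder connection details; substitute your own cluster endpoint.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="********",
)

with conn.cursor() as cur:
    # List the _sdc_* metadata columns the target added to a loaded table.
    cur.execute(
        """
        SELECT column_name
        FROM information_schema.columns
        WHERE table_schema = %s
          AND table_name = %s
          AND LEFT(column_name, 5) = '_sdc_'
        """,
        ("tap_clickup", "tasks"),
    )
    print([row[0] for row in cur.fetchall()])

conn.close()
```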
Redshift COPY role ARN to use for the COPY command from S3.
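For context, this ARN ends up in the IAM_ROLE clause of the COPY statement that loads staged files into Redshift. A hand-written equivalent might look like the sketch below; schema, table, bucket, and role ARN are all placeholders:

```python
import psycopg2

# All identifiers below (schema, table, bucket, role ARN) are placeholders.
COPY_SQL = """
COPY tap_clickup.tasks
FROM 's3://my-staging-bucket/staging/tasks.csv.gz'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
FORMAT AS CSV
GZIP;
"""

conn = psycopg2.connect(host="<cluster-endpoint>", port=5439, dbname="dev",
                        user="awsuser", password="********")
with conn.cursor() as cur:
    cur.execute(COPY_SQL)  # Redshift reads the staged file straight from S3.
conn.commit()
conn.close()
```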
Maximum number of rows in each batch.
Redshift cluster identifier. Note that if sqlalchemy_url is set or enable_iam_authentication is false, this setting is ignored.
Database name. Note that if sqlalchemy_url is set, this setting is ignored.
Redshift schema to send data to, example: tap-clickup
If true, use temporary credentials (https://docs.aws.amazon.com/redshift/latest/mgmt/generating-iam-credentials-cli-api.html).
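These temporary credentials come from the Redshift GetClusterCredentials API. A minimal boto3 sketch, with hypothetical cluster and database names:

```python
import boto3

# Hypothetical cluster, database, and user names.
redshift = boto3.client("redshift", region_name="us-east-1")
creds = redshift.get_cluster_credentials(
    DbUser="awsuser",
    DbName="dev",
    ClusterIdentifier="my-cluster",
    DurationSeconds=3600,
    AutoCreate=False,
)
# Short-lived DbUser/DbPassword pair, valid until creds["Expiration"].
print(creds["DbUser"], creds["Expiration"])
```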
One or more LCID locale strings to produce localized output for: https://faker.readthedocs.io/en/master/#localization
Value to seed the Faker generator for deterministic output: https://faker.readthedocs.io/en/master/#seeding-the-generator
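Together, these two settings map onto Faker's locale and seeding APIs; a small sketch:

```python
from faker import Faker

Faker.seed(12345)                 # deterministic output across runs
fake = Faker(["en_US", "ja_JP"])  # one or more LCID locale strings

print(fake.name())     # drawn from one of the configured locales
print(fake.address())
```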
'True' to enable schema flattening and automatically expand nested properties.
The max depth to flatten schemas.
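As a rough illustration of what depth-limited flattening does (a sketch, not the SDK's actual implementation): nested properties are expanded into double-underscore-separated columns until max_depth is reached, and anything deeper is kept as serialized JSON.

```python
import json

def flatten(record: dict, max_depth: int, parent: str = "", depth: int = 0) -> dict:
    """Expand nested dicts into 'parent__child' keys, up to max_depth levels."""
    out = {}
    for key, value in record.items():
        name = f"{parent}__{key}" if parent else key
        if isinstance(value, dict) and depth < max_depth:
            out.update(flatten(value, max_depth, name, depth + 1))
        else:
            # Beyond max_depth, nested objects are kept as serialized JSON.
            out[name] = json.dumps(value) if isinstance(value, dict) else value
    return out

record = {"id": 1, "user": {"name": "Ada", "address": {"city": "London"}}}
print(flatten(record, max_depth=1))
# {'id': 1, 'user__name': 'Ada', 'user__address': '{"city": "London"}'}
```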
When an activate version message is sent from a tap, this specifies whether to delete the records that don't match, or mark them with a date in the _sdc_deleted_at column. This config option is ignored if activate_version is set to false.
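The two behaviors correspond roughly to the statements below; the table, version column, and parameter names are placeholders, and the target's actual SQL may differ:

```python
# hard_delete = true: superseded rows are removed outright.
HARD_DELETE_SQL = """
DELETE FROM tap_clickup.tasks
WHERE _sdc_table_version < %(current_version)s;
"""

# hard_delete = false: superseded rows are kept, stamped as deleted.
SOFT_DELETE_SQL = """
UPDATE tap_clickup.tasks
SET _sdc_deleted_at = GETDATE()
WHERE _sdc_table_version < %(current_version)s;
"""
```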
Hostname for the Redshift instance.
The method to use when loading data into the destination. append-only will always write all input records, whether or not the record already exists. upsert will update existing records and insert new records. overwrite will delete all existing records and insert all input records.
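For upsert, the classic Redshift pattern is to stage the batch, delete matching rows, then insert everything from staging. A rough sketch with placeholder table and key names (the target's actual SQL may differ):

```python
# upsert, sketched as the stage-delete-insert pattern common on Redshift.
UPSERT_SQL = """
BEGIN;

-- Drop target rows that are about to be replaced by staged rows.
DELETE FROM tap_clickup.tasks
USING tap_clickup.tasks_stage AS s
WHERE tap_clickup.tasks.id = s.id;

-- Insert the full staged batch: updated rows and brand-new rows alike.
INSERT INTO tap_clickup.tasks
SELECT * FROM tap_clickup.tasks_stage;

END;
"""
```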
Password used to authenticate. Note that if sqlalchemy_url is set, this setting is ignored.
The port on which Redshift listens for connections.
Whether to process ACTIVATE_VERSION messages.
Whether to remove staging files from S3 after loading.
S3 bucket to save staging files to before running the COPY command.
S3 key prefix to save staging files under before running the COPY command.
AWS region for S3 bucket. If not specified, region will be detected by boto config resolution. See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html
Whether or not to use SSL to verify the server's identity. Use ssl_certificate_authority and ssl_mode for further customization. To use a client certificate to authenticate yourself to the server, use ssl_client_certificate_enable instead.
SSL protection method; see the [Redshift documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-ssl-support.html) for more information. Must be one of disable, allow, prefer, require, verify-ca, or verify-full.
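With psycopg2-style drivers, these values map onto the sslmode and sslrootcert connection options. A minimal sketch with placeholder connection details:

```python
import psycopg2

# Placeholder connection details; verify-ca checks the server certificate
# against a trusted CA bundle, verify-full additionally checks the hostname.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="********",
    sslmode="verify-ca",
    sslrootcert="/path/to/redshift-ca-bundle.crt",
)
```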
User-defined config values to be used within map expressions.
Config object for stream maps capability. For more information, check out Stream Maps.
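A small illustrative config, following the Meltano SDK's stream map expression syntax; the stream and property names here are hypothetical:

```python
# Hypothetical stream and property names; expression syntax per the Meltano SDK.
config = {
    "stream_maps": {
        "customers": {
            "email": None,  # setting a property to null removes it
            "email_hash": "md5(config['hash_seed'] + email)",  # derived property
        }
    },
    "stream_map_config": {
        # Values here are available to map expressions via config[...]
        "hash_seed": "some-seed-value",
    },
}
```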
Where you want to store your temp data files.
User name used to authenticate. Note that if sqlalchemy_url is set, this setting is ignored.
Whether to validate the schema of the incoming streams.
Collect and process data from 100s of sources and tools with Amazon Redshift.