Collect BigQuery data into your data warehouse or ours. The Matatika pipelines will take care of the data collection and preparation for your analytics and BI tools.
BigQuery data warehouse extractor
Compression format to use for batch files.
Format to use for batch files.
Prefix to use when writing batch files.
Root path to use when writing batch files.
JSON content or path to service account credentials.
An optional Google Cloud Storage bucket. When supplied, a file-based extract is used.
If an array of schema names is provided, the tap will only process the specified BigQuery schemas (datasets) and ignore all others. If left blank, the tap automatically discovers all available schemas.
If an array of table names is provided, the tap will only process the specified BigQuery tables and ignore all others. If left blank, the tap automatically discovers all available tables. Shell-style wildcard patterns are supported.
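As an illustration, schema and table filtering might look like the following. The key names `filter_schemas` and `filter_tables` are assumptions for the sake of the example and may differ in your setup; the `events_*` entry shows a shell-style wildcard matching every table whose name starts with `events_`:

```json
{
  "filter_schemas": ["analytics", "marketing"],
  "filter_tables": ["orders", "events_*"]
}
```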
'True' to enable schema flattening, which automatically expands nested properties into top-level columns.
The max depth to flatten schemas.
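To sketch the effect of these two settings (the double-underscore separator shown here is a common convention for flattened property names, but your tap may differ): a nested record such as

```json
{ "user": { "address": { "city": "London" } } }
```

would, with flattening enabled and a sufficient max depth, be emitted as a flat property like `user__address__city`, while a smaller max depth would stop expanding at that level and leave the remaining nesting as a JSON object.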
The ID of the GCP project that contains the BigQuery data.
User-defined config values to be used within map expressions.
Config object for the stream maps capability. For more information, check out Stream Maps.
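As a hedged sketch of how stream maps and their config values might be combined (the stream name `orders`, the property names, and the `hash_seed` key are hypothetical; the expression syntax follows the common Meltano SDK style, where setting a property to `null` removes it):

```json
{
  "stream_maps": {
    "orders": {
      "email_hash": "md5(config['hash_seed'] + email)",
      "email": null
    }
  },
  "stream_map_config": {
    "hash_seed": "some-secret-value"
  }
}
```

Here `stream_map_config` supplies the user-defined `hash_seed` value that the map expression reads via `config['hash_seed']`, so the secret stays out of the expression itself.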
Extract, Transform, and Load BigQuery data into your data warehouse or ours.