Imports data from Snowflake. If you have a Snowflake database configured as your `client_cloud`, you should probably use the `do` task.

Before you can download data from Snowflake, you have to grant access.
```yaml
extract:
  query: SELECT 1 AS int_field, '{{ task.run_id }}' AS string_field, CURRENT_TIMESTAMP() AS timestamp_field
  db_conn_id: snowflake
  storage_conn_id: aws_s3
  database: PRODUCTION
  schema: OSB_RAW
  bucket: data-exchange-bucket
  folder: some-folder-that-holds-the-data
```
property | type | mandatory | description
---|---|---|---
conn_id | string | no | Default is `snowflake`. Connection string as handed to you by the Onesecondbefore team.
query | string | no | Use either `query` or `template` (below). Query to be executed; its results will be uploaded to the destination.
template | string | no | Use either `query` (above) or `template`. Path to a file in the `includes` folder of your repository that contains the SQL statement. This query will be executed and the results will be uploaded to the destination.
db_conn_id | string | no | Connection to use to connect to the Snowflake database.
storage_conn_id | string | no | Connection to use to connect to blob storage.
database | string | no | Name of the database of the table to extract to blob storage.
schema | string | no | Name of the schema in which the temporary table will be created. The table id will be `_tmp_{{ task.run_id }}`.
bucket | string | no | Bucket to which the data from the temporary table will be extracted.
folder | string | no | Folder to which the data from the temporary table will be extracted.
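As an alternative to an inline `query`, you can point the task at a SQL file via `template`. A minimal sketch, assuming a hypothetical file `extract_orders.sql` exists in your repository's `includes` folder (the file name and folder contents are illustrative):

```yaml
extract:
  # hypothetical SQL file in the includes folder of your repository
  template: includes/extract_orders.sql
  db_conn_id: snowflake
  storage_conn_id: aws_s3
  database: PRODUCTION
  schema: OSB_RAW
  bucket: data-exchange-bucket
  folder: some-folder-that-holds-the-data
```

Keeping the SQL in a separate file is useful for longer statements, since it can be versioned and reviewed independently of the task configuration.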
item | description
---|---
Pre-formatted schema | No. Use the config `schema` if you want to add descriptions.