The target-bigquery Meltano loader loads data into Google BigQuery after it has been pulled from a source using an extractor.
Available Variants
- adswerve (default)
- transferwise
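To use a non-default variant, it can be selected explicitly when adding the plugin, for example:

  meltano add loader target-bigquery --variant transferwise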
Getting Started
Prerequisites
If you haven't already, follow the initial steps of the Getting Started guide.
Then, follow the steps in the "Activate the Google BigQuery API" section of the repository's README.
Installation and configuration
- Add the target-bigquery loader to your project using meltano add:

  meltano add loader target-bigquery

- Configure the target-bigquery settings using meltano config:

  meltano config target-bigquery set --interactive
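Individual settings can also be set non-interactively; for example (the project and dataset values below are placeholders):

  meltano config target-bigquery set project_id my-gcp-project
  meltano config target-bigquery set dataset_id raw_data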
Next steps
Follow the remaining steps of the Getting Started guide.
If you run into any issues, learn how to get help.
Capabilities
The current capabilities for target-bigquery may have been automatically set when originally added to the Hub. Please review the capabilities when using this loader. If you find they are out of date, please consider updating them by making a pull request to the YAML file that defines the capabilities for this loader.

This plugin has the following capabilities:
You can override these capabilities or specify additional ones in your meltano.yml by adding the capabilities key.
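As an illustrative sketch, such an override lives under the plugin definition in meltano.yml (the capability name shown is a placeholder; use the ones that apply to your variant):

  plugins:
    loaders:
      - name: target-bigquery
        variant: adswerve
        capabilities:
          - about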
Settings
The target-bigquery settings that are known to Meltano are documented below. To quickly find the setting you're looking for, click on any setting name from the list:
project_id
dataset_id
location
credentials_path
validate_records
add_metadata_columns
replication_method
table_prefix
table_suffix
max_cache
merge_state_messages
table_config
You can override these settings or specify additional ones in your meltano.yml by adding the settings key.
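Setting values themselves are typically provided under the plugin's config key in meltano.yml; a minimal sketch with placeholder values:

  plugins:
    loaders:
      - name: target-bigquery
        variant: adswerve
        config:
          project_id: my-gcp-project
          dataset_id: raw_data
          location: US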
Please consider adding any settings you have defined locally to this definition on MeltanoHub by making a pull request to the YAML file that defines the settings for this plugin.
Project ID (project_id)
- Environment variable: TARGET_BIGQUERY_PROJECT_ID
BigQuery project
Dataset ID (dataset_id)
- Environment variable: TARGET_BIGQUERY_DATASET_ID
- Default Value: $MELTANO_EXTRACT__LOAD_SCHEMA
BigQuery dataset. The default value will expand to the value of the load_schema extra for the extractor used in the pipeline, which defaults to the extractor's namespace, e.g. tap_gitlab for tap-gitlab.
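For example, assuming a tap-gitlab to target-bigquery pipeline, the dataset can be overridden for a single invocation via the environment variable (the dataset name is a placeholder):

  TARGET_BIGQUERY_DATASET_ID=analytics meltano run tap-gitlab target-bigquery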
Location (location)
- Environment variable: TARGET_BIGQUERY_LOCATION
- Default Value: US
Dataset Location. See https://cloud.google.com/bigquery/docs/locations.
Credentials Path (credentials_path)
- Environment variable: TARGET_BIGQUERY_CREDENTIALS_PATH
- Default Value: $MELTANO_PROJECT_ROOT/client_secrets.json
Fully qualified path to client_secrets.json for your service account. See the "Activate the Google BigQuery API" section of the repository's README and https://cloud.google.com/docs/authentication/production. By default, this file is expected to be at the root of your project directory.
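If you keep the service account key elsewhere, point the setting at it explicitly (the path below is a placeholder):

  meltano config target-bigquery set credentials_path /path/to/service-account-key.json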
Validate Records (validate_records)
- Environment variable: TARGET_BIGQUERY_VALIDATE_RECORDS
- Default Value: false
Add Metadata Columns (add_metadata_columns)
- Environment variable: TARGET_BIGQUERY_ADD_METADATA_COLUMNS
- Default Value: false
Add _time_extracted and _time_loaded metadata columns
Replication Method (replication_method)
- Environment variable: TARGET_BIGQUERY_REPLICATION_METHOD
- Default Value: append
The replication method can be:
- append: Adding new rows to the table (default value)
- truncate: Deleting all previous rows and uploading the new ones to the table
- incremental: Upserting new rows into the table, using the primary key given by the tap connector (if it finds an old row with the same key, it updates it; otherwise it inserts the new row)
WARNING: It is not recommended to use the incremental option (which uses the MERGE SQL statement). It might result in loss of production data, because historical records get updated. Instead, we recommend using the append replication method, which will preserve historical data.
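For example, to switch to full-table reloads instead of appends (a sketch; pick the method that matches your pipeline):

  meltano config target-bigquery set replication_method truncate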
Table Prefix (table_prefix)
- Environment variable: TARGET_BIGQUERY_TABLE_PREFIX
Add prefix to table name
Table Suffix (table_suffix)
- Environment variable: TARGET_BIGQUERY_TABLE_SUFFIX
Add suffix to table name
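As a sketch, configuring both in meltano.yml (placeholder values) would presumably produce table names such as raw_users_v1 for a users stream:

  config:
    table_prefix: raw_
    table_suffix: _v1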
Max Cache (max_cache)
- Environment variable: TARGET_BIGQUERY_MAX_CACHE
- Default Value: 50
Maximum cache size in MB
Merge State Messages (merge_state_messages)
- Environment variable: TARGET_BIGQUERY_MERGE_STATE_MESSAGES
- Default Value: false
Whether to merge multiple state messages from the tap into the state file, or to use only the last state message as the state file. Note that it is not recommended to set this to true when used with Meltano, as the merge behavior conflicts with Meltano's merge process.
Table Config (table_config)
- Environment variable: TARGET_BIGQUERY_TABLE_CONFIG
A path to a file containing the definition of partitioning and clustering.
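For example, the setting can point at a file kept in your project (the filename is a placeholder; see the variant's README for the expected file format):

  meltano config target-bigquery set table_config ./table_config.json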
Something missing?
This page is generated from a YAML file that you can contribute changes to. Edit it on GitHub!

Looking for help?
If you run into any issues, ask in the #plugins-general channel of the Meltano Slack community.