BigQueryToGCSOperator

Google

Transfers a BigQuery table to a Google Cloud Storage bucket.

Last Updated: Mar. 20, 2023

Access Instructions

Install the Google provider package into your Airflow environment.

Import the operator into your DAG file and instantiate it with your desired parameters.
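
Concretely, the two steps above amount to installing the apache-airflow-providers-google package (for example with pip) and importing the operator in your DAG file. A minimal sketch, assuming a recent version of the Google provider:

from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator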

Parameters

source_project_dataset_table (Required): The dotted (<project>.|<project>:)<dataset>.<table> BigQuery table to use as the source data. If <project> is not included, the project defined in the connection JSON will be used. (templated)
destination_cloud_storage_uris (Required): The destination Google Cloud Storage URI (e.g. gs://some-bucket/some-file.txt). (templated) Follows the convention defined here: https://cloud.google.com/bigquery/exporting-data-from-bigquery#exportingmultiple
project_id: Google Cloud project where the job runs.
compression: Type of compression to use.
export_format: File format to export.
field_delimiter: The delimiter to use when extracting to a CSV.
print_header: Whether to print a header for a CSV file extract.
gcp_conn_id: (Optional) The connection ID used to connect to Google Cloud.
delegate_to: The account to impersonate using domain-wide delegation of authority, if any. For this to work, the service account making the request must have domain-wide delegation enabled.
labels: A dictionary containing labels for the job/query, passed to BigQuery.
location: The location used for the operation.
impersonation_chain: Optional service account to impersonate using short-term credentials, or a chained list of accounts required to get the access_token of the last account in the list, which will be impersonated in the request. If set as a string, the account must grant the originating account the Service Account Token Creator IAM role. If set as a sequence, the identities from the list must grant the Service Account Token Creator IAM role to the directly preceding identity, with the first account in the list granting this role to the originating account. (templated)
result_retry: How to retry the result call that retrieves rows.
result_timeout: The number of seconds to wait for the result method before using result_retry.
job_id: The ID of the job. It will be suffixed with a hash of the job configuration unless force_rerun is True. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters. If not provided, a UUID will be generated.
force_rerun: If True, the operator will use a hash of a UUID as the job ID suffix.
reattach_states: Set of BigQuery job states for which we should reattach to the job. Should not include final states.
deferrable: Run the operator in deferrable mode.
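
Putting the parameters above together, here is a minimal sketch of how the operator might be instantiated in a DAG, assuming Airflow 2.4 or later. The project, dataset, table, and bucket names are placeholders, and only a subset of the parameters is shown:

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator

with DAG(
    dag_id="example_bigquery_to_gcs",
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    # Export a BigQuery table to GCS as gzipped CSV files.
    export_table = BigQueryToGCSOperator(
        task_id="export_table_to_gcs",
        source_project_dataset_table="my-project.my_dataset.my_table",  # placeholder table
        destination_cloud_storage_uris=["gs://my-bucket/exports/my_table-*.csv"],  # placeholder bucket
        export_format="CSV",
        compression="GZIP",
        field_delimiter=",",
        print_header=True,
        gcp_conn_id="google_cloud_default",
    )

Because source_project_dataset_table and destination_cloud_storage_uris are templated, Jinja expressions such as {{ ds_nodash }} can be used to build date-partitioned export paths.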

Documentation

Transfers a BigQuery table to a Google Cloud Storage bucket.

See also

For more information on how to use this operator, take a look at the Google provider's how-to guide for BigQueryToGCSOperator.
For more details about these parameters: https://cloud.google.com/bigquery/docs/reference/v2/jobs
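
As a variation, the impersonation_chain and deferrable parameters described above can be combined so that the export runs under a different service account and the wait for job completion is handed to the triggerer. This is a sketch only; the service account is hypothetical, and deferrable mode requires a provider version that supports it:

from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator

# Inside a DAG context, alongside the example above.
export_table_deferred = BigQueryToGCSOperator(
    task_id="export_table_to_gcs_deferred",
    source_project_dataset_table="my-project.my_dataset.my_table",  # placeholder table
    destination_cloud_storage_uris=["gs://my-bucket/exports/my_table-*.json"],  # placeholder bucket
    export_format="NEWLINE_DELIMITED_JSON",
    impersonation_chain="transfer-sa@my-project.iam.gserviceaccount.com",  # hypothetical service account
    deferrable=True,  # poll the BigQuery job from the triggerer instead of a worker slot
)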
