BigQueryCreateExternalTableOperator

Provider: Google

Creates a new external table in the dataset with the data from Google Cloud Storage.

Last Updated: Mar. 16, 2023

Access Instructions

Install the Google provider package into your Airflow environment.

Import the module into your DAG file and instantiate it with your desired params.
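For example, after installing the provider package (pip install apache-airflow-providers-google), a minimal sketch might look like the following. The bucket name, object path, and dataset/table names are placeholders, and the DAG uses the Airflow 2.4+ schedule argument (older versions use schedule_interval):

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryCreateExternalTableOperator,
)

with DAG(
    dag_id="example_bq_create_external_table",
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    create_external_table = BigQueryCreateExternalTableOperator(
        task_id="create_external_table",
        bucket="my-bucket",                       # placeholder bucket name
        source_objects=["data/sales/*.csv"],      # placeholder object path in GCS
        destination_project_dataset_table="my_dataset.sales_external",  # placeholder
        source_format="CSV",
        autodetect=True,
    )
```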

Parameters

bucket: The bucket to point the external table to. (templated)

source_objects: List of Google Cloud Storage URIs to point the table to. If source_format is 'DATASTORE_BACKUP', the list must contain only a single URI.

destination_project_dataset_table: The dotted (<project>.)<dataset>.<table> BigQuery table to load data into (templated). If <project> is not included, the project defined in the connection JSON is used.

schema_fields: If set, the schema field list as defined here: https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.schema Example: schema_fields=[{"name": "emp_name", "type": "STRING", "mode": "REQUIRED"}, {"name": "salary", "type": "INTEGER", "mode": "NULLABLE"}] Should not be set when source_format is 'DATASTORE_BACKUP'.

table_resource: Table resource as described in the documentation: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#Table If provided, all other parameters are ignored. The external schema from the object will be resolved. (See the sketch after this list.)

schema_object: If set, a GCS object path pointing to a .json file that contains the schema for the table. (templated)

source_format: File format of the data.

autodetect: Try to detect schema and format options automatically. The schema_fields and schema_object options will be honored when specified explicitly. https://cloud.google.com/bigquery/docs/schema-detect#schema_auto-detection_for_external_data_sources

compression: (Optional) The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups, and Avro formats.

skip_leading_rows: Number of rows to skip when loading from a CSV.

field_delimiter: The delimiter to use for the CSV.

max_bad_records: The maximum number of bad records that BigQuery can ignore when running the job.

quote_character: The value that is used to quote data sections in a CSV file.

allow_quoted_newlines: Whether to allow quoted newlines (true) or not (false).

allow_jagged_rows: Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. Only applicable to CSV; ignored for other formats.

gcp_conn_id: (Optional) The connection ID used to connect to Google Cloud and interact with the BigQuery service.

google_cloud_storage_conn_id: (Optional) The connection ID used to connect to Google Cloud and interact with the Google Cloud Storage service.

delegate_to: The account to impersonate using domain-wide delegation of authority, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

src_fmt_configs: Configure optional fields specific to the source format.

labels: A dictionary containing labels for the table, passed to BigQuery.

encryption_configuration: (Optional) Custom encryption configuration (e.g., Cloud KMS keys). Example: encryption_configuration = { "kmsKeyName": "projects/testp/locations/us/keyRings/test-kr/cryptoKeys/test-key" }

location: The location used for the operation.

impersonation_chain: Optional service account to impersonate using short-term credentials, or a chained list of accounts required to get the access_token of the last account in the list, which will be impersonated in the request. If set as a string, the account must grant the originating account the Service Account Token Creator IAM role. If set as a sequence, the identities from the list must grant the Service Account Token Creator IAM role to the directly preceding identity, with the first account in the list granting this role to the originating account (templated).
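As referenced in the table_resource entry above, a sketch of passing a full table resource instead of the individual parameters might look like this. The field names follow the BigQuery Table REST resource; the project, dataset, table, and bucket values are placeholders:

```python
create_from_resource = BigQueryCreateExternalTableOperator(
    task_id="create_external_table_from_resource",
    # When table_resource is provided, the other operator parameters are ignored.
    table_resource={
        "tableReference": {
            "projectId": "my-project",        # placeholder project
            "datasetId": "my_dataset",        # placeholder dataset
            "tableId": "sales_external",      # placeholder table
        },
        "externalDataConfiguration": {
            "sourceFormat": "CSV",
            "sourceUris": ["gs://my-bucket/data/sales/*.csv"],  # placeholder URIs
            "autodetect": True,
            "csvOptions": {"skipLeadingRows": 1, "fieldDelimiter": ","},
        },
        "labels": {"team": "analytics"},      # placeholder label
    },
    gcp_conn_id="google_cloud_default",
)
```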

Documentation

Creates a new external table in the dataset with the data from Google Cloud Storage.

The schema to be used for the BigQuery table may be specified in one of two ways. You may either directly pass the schema fields in, or you may point the operator to a Google Cloud Storage object name. The object in Google Cloud Storage must be a JSON file with the schema fields in it.
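A sketch of the two approaches, using placeholder bucket, object, and table names:

```python
# Option 1: pass the schema fields directly to the operator
inline_schema = BigQueryCreateExternalTableOperator(
    task_id="external_table_inline_schema",
    bucket="my-bucket",                                   # placeholder
    source_objects=["data/employees.csv"],                # placeholder
    destination_project_dataset_table="my_dataset.employees_external",
    source_format="CSV",
    skip_leading_rows=1,
    schema_fields=[
        {"name": "emp_name", "type": "STRING", "mode": "REQUIRED"},
        {"name": "salary", "type": "INTEGER", "mode": "NULLABLE"},
    ],
)

# Option 2: point to a JSON schema file stored in Google Cloud Storage
gcs_schema = BigQueryCreateExternalTableOperator(
    task_id="external_table_gcs_schema",
    bucket="my-bucket",                                   # placeholder
    source_objects=["data/employees.csv"],                # placeholder
    destination_project_dataset_table="my_dataset.employees_external",
    source_format="CSV",
    skip_leading_rows=1,
    schema_object="schemas/employees_schema.json",        # placeholder schema object
)
```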

See also

For more information on how to use this operator, take a look at the guide: Create external table