BigQueryCreateEmptyTableOperator

Google

Creates a new, empty table in the specified BigQuery dataset, optionally with schema.


Last Updated: Mar. 16, 2023

Access Instructions

Install the Google provider package into your Airflow environment.

Import the module into your DAG file and instantiate it with your desired params.
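
In code, the import typically looks like the line below (assuming the apache-airflow-providers-google package has been installed into your environment, e.g. with pip):

    # Import the operator from the Google provider package.
    from airflow.providers.google.cloud.operators.bigquery import BigQueryCreateEmptyTableOperator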

Parameters

project_id: The project in which to create the table. (templated)
dataset_id: Required. The dataset in which to create the table. (templated)
table_id: Required. The name of the table to be created. (templated)
table_resource: Table resource as described in the documentation: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#Table If provided, all other parameters are ignored. (templated)
schema_fields: If set, the schema field list as defined here: https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.schema Example: schema_fields=[{"name": "emp_name", "type": "STRING", "mode": "REQUIRED"}, {"name": "salary", "type": "INTEGER", "mode": "NULLABLE"}]
gcs_schema_object: Full path to the JSON file containing the schema (templated). For example: gs://test-bucket/dir1/dir2/employee_schema.json
time_partitioning: Configure optional time partitioning fields, i.e. partition by field, type, and expiration, as per the API specifications (see the usage sketch after this parameter list). See also https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#timePartitioning
gcp_conn_id: [Optional] The connection ID used to connect to Google Cloud and interact with the BigQuery service.
google_cloud_storage_conn_id: [Optional] The connection ID used to connect to Google Cloud and interact with the Google Cloud Storage service.
delegate_to: The account to impersonate using domain-wide delegation of authority, if any. For this to work, the service account making the request must have domain-wide delegation enabled.
labels: A dictionary containing labels for the table, passed to BigQuery.

Example (with schema JSON in GCS):

    CreateTable = BigQueryCreateEmptyTableOperator(
        task_id='BigQueryCreateEmptyTableOperator_task',
        dataset_id='ODS',
        table_id='Employees',
        project_id='internal-gcp-project',
        gcs_schema_object='gs://schema-bucket/employee_schema.json',
        gcp_conn_id='airflow-conn-id',
        google_cloud_storage_conn_id='airflow-conn-id'
    )

Corresponding schema file (employee_schema.json):

    [
        {
            "mode": "NULLABLE",
            "name": "emp_name",
            "type": "STRING"
        },
        {
            "mode": "REQUIRED",
            "name": "salary",
            "type": "INTEGER"
        }
    ]

Example (with schema in the DAG):

    CreateTable = BigQueryCreateEmptyTableOperator(
        task_id='BigQueryCreateEmptyTableOperator_task',
        dataset_id='ODS',
        table_id='Employees',
        project_id='internal-gcp-project',
        schema_fields=[{"name": "emp_name", "type": "STRING", "mode": "REQUIRED"},
                       {"name": "salary", "type": "INTEGER", "mode": "NULLABLE"}],
        gcp_conn_id='airflow-conn-id-account',
        google_cloud_storage_conn_id='airflow-conn-id'
    )
view: [Optional] A dictionary containing the definition for the view. If set, it will create a view instead of a table. See also https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#ViewDefinition
materialized_view: [Optional] The materialized view definition.
encryption_configuration: [Optional] Custom encryption configuration (e.g., Cloud KMS keys). Example: encryption_configuration = { "kmsKeyName": "projects/testp/locations/us/keyRings/test-kr/cryptoKeys/test-key" }
location: The location used for the operation.
cluster_fields: [Optional] The fields used for clustering. BigQuery supports clustering for both partitioned and non-partitioned tables (see the usage sketch after this parameter list). See also https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#clustering.fields
impersonation_chain: Optional service account to impersonate using short-term credentials, or a chained list of accounts required to get the access_token of the last account in the list, which will be impersonated in the request. If set as a string, the account must grant the originating account the Service Account Token Creator IAM role. If set as a sequence, the identities from the list must grant the Service Account Token Creator IAM role to the directly preceding identity, with the first account in the list granting this role to the originating account. (templated)
if_exists: What Airflow should do if the table already exists. If set to log, the task instance will be marked successful and an error message will be logged. Set to ignore to ignore the error, set to fail to fail the task instance, and set to skip to skip it.
exists_ok: Deprecated - use if_exists="ignore" instead.
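
The sketch below shows one way the partitioning, clustering, and label parameters above might be combined; the project, dataset, table, and field names are illustrative assumptions rather than values from this page.

    from airflow.providers.google.cloud.operators.bigquery import BigQueryCreateEmptyTableOperator

    create_partitioned_table = BigQueryCreateEmptyTableOperator(
        task_id="create_partitioned_table",
        project_id="my-gcp-project",   # assumed project
        dataset_id="analytics",        # assumed dataset
        table_id="events",             # assumed table
        schema_fields=[
            {"name": "event_ts", "type": "TIMESTAMP", "mode": "REQUIRED"},
            {"name": "user_id", "type": "STRING", "mode": "NULLABLE"},
        ],
        # Partition by day on the event_ts column; expire partitions after 90 days.
        time_partitioning={
            "type": "DAY",
            "field": "event_ts",
            "expirationMs": "7776000000",  # 90 days in milliseconds
        },
        # Cluster rows within each partition by user_id.
        cluster_fields=["user_id"],
        # Labels are passed through to the BigQuery table resource.
        labels={"team": "data-eng", "env": "dev"},
    )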

Documentation

Creates a new, empty table in the specified BigQuery dataset, optionally with schema.

The schema for the BigQuery table may be specified in one of two ways: you may either pass the schema fields in directly, or you may point the operator to a Google Cloud Storage object name. The object in Google Cloud Storage must be a JSON file containing the schema fields. You can also create a table without a schema.
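
As a minimal sketch of the Google Cloud Storage approach, the DAG below points the operator at the schema file from the example above; the DAG id, schedule, and start date are placeholder assumptions.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.bigquery import BigQueryCreateEmptyTableOperator

    with DAG(
        dag_id="bq_create_empty_table_example",  # assumed DAG id
        start_date=datetime(2023, 1, 1),
        schedule=None,
        catchup=False,
    ):
        # Schema is read from a JSON file in GCS (see the schema file example above).
        create_employees = BigQueryCreateEmptyTableOperator(
            task_id="create_employees_table",
            dataset_id="ODS",
            table_id="Employees",
            gcs_schema_object="gs://schema-bucket/employee_schema.json",
            # Do not fail the task if the table already exists.
            if_exists="ignore",
        )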

See also

For more information on how to use this operator, take a look at the guide: Create native table
