DataflowTemplatedJobStartOperator

Google

Start a Templated Cloud Dataflow job. The parameters of the operation will be passed to the job.


Last Updated: Feb. 25, 2023

Access Instructions

Install the Google provider package into your Airflow environment.

Import the module into your DAG file and instantiate it with your desired params.

Parameters

template: Required. The reference to the Dataflow template.
job_name: The 'jobName' to use when executing the Dataflow template (templated).
options: Map of job runtime environment options. It updates the environment argument if passed. For more information on possible configurations, see the API documentation: https://cloud.google.com/dataflow/pipelines/specifying-exec-params
dataflow_default_options: Map of default job environment options.
parameters: Map of job-specific parameters for the template.
project_id: Optional. The Google Cloud project ID in which to start the job. If set to None or missing, the default project_id from the Google Cloud connection is used.
location: Job location.
gcp_conn_id: The connection ID to use when connecting to Google Cloud.
delegate_to: The account to impersonate using domain-wide delegation of authority, if any. For this to work, the service account making the request must have domain-wide delegation enabled.
poll_sleep: The time in seconds to sleep between polls of Google Cloud Platform for the Dataflow job status while the job is in the JOB_STATE_RUNNING state.
impersonation_chain: Optional service account to impersonate using short-term credentials, or a chained list of accounts required to get the access_token of the last account in the list, which will be impersonated in the request. If set as a string, the account must grant the originating account the Service Account Token Creator IAM role. If set as a sequence, each identity in the list must grant the Service Account Token Creator IAM role to the directly preceding identity, with the first account in the list granting this role to the originating account (templated).
environment: Optional. Map of job runtime environment options. For more information on possible configurations, see the API documentation: https://cloud.google.com/dataflow/pipelines/specifying-exec-params
cancel_timeout: How long (in seconds) the operator should wait for the pipeline to be successfully cancelled when the task is killed.
append_job_name: True if a unique suffix has to be appended to the job name.
wait_until_finished: Optional. If True, wait for the end of pipeline execution before exiting. If False, only submit the job. If None, use the default behavior, which depends on the type of pipeline: for a streaming pipeline, wait for the job to start; for a batch pipeline, wait for the job to complete. Warning: you must not call the PipelineResult.wait_until_finish method in your pipeline code for the operator to work properly, i.e. you must use asynchronous execution; otherwise, your pipeline will always wait until finished. For more information, see: Asynchronous execution. The process of starting the Dataflow job in Airflow consists of two steps: running a subprocess and reading the stderr/stdout logs for the job ID, then a loop waiting for that job to reach a terminal state. This loop checks the status of the job. Step two starts just after step one has finished, so if you call wait_until_finish in your pipeline code, step two will not start until the subprocess stops. When the subprocess stops, step two will run, but it will only execute one iteration because the job will already be in a terminal state. If your pipeline does not call wait_until_finish and you pass wait_until_finished=True to the operator, the second loop will wait for the job's terminal state. If your pipeline does not call wait_until_finish and you pass wait_until_finished=False to the operator, the second loop will check once whether the job is in a terminal state and then exit. A minimal usage sketch follows this list.
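
For example, here is a minimal sketch of submitting a templated job without waiting for completion (the task ID, template path, and bucket paths below are placeholders, and the operator would be attached to a DAG as in the examples further down):

from airflow.providers.google.cloud.operators.dataflow import (
    DataflowTemplatedJobStartOperator,
)

# Submit the job and return immediately; the Dataflow job keeps running
# in Google Cloud and is not polled for a terminal state.
submit_only = DataflowTemplatedJobStartOperator(
    task_id="dataflow_submit_only",  # placeholder task ID
    template="gs://dataflow-templates/latest/Word_Count",  # placeholder template path
    parameters={
        "inputFile": "gs://my-bucket/input/example.txt",
        "output": "gs://my-bucket/output/results",
    },
    location="europe-west1",
    wait_until_finished=False,  # only submit; do not wait for a terminal state
)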

Documentation

Start a Templated Cloud Dataflow job. The parameters of the operation will be passed to the job.

See also

For more information on how to use this operator, take a look at the guide: Templated jobs

It is good practice to define dataflow_* parameters, such as the project, zone, and staging location, in the default_args of the DAG.

default_args = {
    "dataflow_default_options": {
        "zone": "europe-west1-d",
        "tempLocation": "gs://my-staging-bucket/staging/",
    }
}
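
Assuming the default_args dictionary above, a minimal sketch of attaching it to a DAG (the DAG ID and schedule are placeholders) could look like this; every Dataflow operator created inside the DAG then inherits those options:

from datetime import datetime

from airflow import DAG

with DAG(
    dag_id="dataflow_templated_example",  # placeholder DAG ID
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,  # run manually only
    default_args=default_args,  # dataflow_default_options defined above
) as my_dag:
    ...  # Dataflow operators defined here pick up the defaults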

You need to pass the path to your Dataflow template as a file reference with the template parameter. Use parameters to pass parameters on to your job, and use environment to pass runtime environment options on to your job (an example that sets environment follows the snippet below).

t1 = DataflowTemplatedJobStartOperator(
    task_id="dataflow_example",
    template="{{var.value.gcp_dataflow_base}}",
    parameters={
        "inputFile": "gs://bucket/input/my_input.txt",
        "outputFile": "gs://bucket/output/my_output.txt",
    },
    gcp_conn_id="airflow-conn-id",
    dag=my_dag,
)
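
The environment argument takes runtime environment options for the Dataflow service. A minimal sketch (the option values below, such as maxWorkers and machineType, are illustrative RuntimeEnvironment fields, not requirements):

t2 = DataflowTemplatedJobStartOperator(
    task_id="dataflow_example_with_environment",
    template="{{var.value.gcp_dataflow_base}}",
    parameters={
        "inputFile": "gs://bucket/input/my_input.txt",
        "outputFile": "gs://bucket/output/my_output.txt",
    },
    environment={
        # Runtime environment options forwarded to the Dataflow service.
        "maxWorkers": 3,
        "machineType": "n1-standard-1",
        "tempLocation": "gs://my-staging-bucket/staging/",
    },
    gcp_conn_id="airflow-conn-id",
    dag=my_dag,
)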

template, dataflow_default_options, parameters, and job_name are templated, so you can use variables in them.

Note that dataflow_default_options is expected to hold high-level options, such as project information, that apply to all Dataflow operators in the DAG.

deferrable: Run the operator in deferrable mode.
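
On provider versions that expose this parameter, a minimal sketch of running the operator in deferrable mode (placeholder task ID, and a running triggerer is assumed):

t3 = DataflowTemplatedJobStartOperator(
    task_id="dataflow_example_deferrable",
    template="{{var.value.gcp_dataflow_base}}",
    parameters={
        "inputFile": "gs://bucket/input/my_input.txt",
        "outputFile": "gs://bucket/output/my_output.txt",
    },
    deferrable=True,  # free the worker slot while the job runs; handled by the triggerer
    dag=my_dag,
)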
