py_file: Required. Reference to the Python Dataflow pipeline file (.py), e.g., /some/local/file/path/to/your/python/pipeline/file. (templated)
job_name: The 'job_name' to use when executing the Dataflow job (templated). This ends up being set in the pipeline options, so any entry with the key 'jobName' or 'job_name' in options will be overwritten.
py_options: Additional Python options, e.g., ["-m", "-v"].
dataflow_default_options: Map of default job options.
options: Map of job-specific options. The parameter must be a dictionary; each key is an option name, and each value is translated to a command-line option as follows (see the sketch after this list):
- If the value is None, the single option --key (without a value) is added.
- If the value is False, the option is skipped.
- If the value is True, the single option --key (without a value) is added.
- If the value is a list, one option is added per element; e.g., if the value is ['A', 'B'] and the key is key, the options --key=A --key=B are passed.
- Any other value type is converted to its Python textual representation.
When defining labels (the labels option), you can also provide a dictionary.
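A minimal sketch of how such a dictionary could map to command-line flags under the rules above; every key and value here is an illustrative placeholder, not a default:

    # Illustrative options dictionary; keys and values are placeholders only.
    options = {
        "temp_location": "gs://my-bucket/temp",  # -> --temp_location=gs://my-bucket/temp
        "streaming": True,                       # True  -> --streaming (flag only, no value)
        "update": False,                         # False -> option is skipped entirely
        "experiments": ["use_runner_v2", "beam_fn_api"],
        # list -> --experiments=use_runner_v2 --experiments=beam_fn_api
        "num_workers": 2,                        # other types -> textual representation: --num_workers=2
        "labels": {"team": "analytics"},         # labels may also be given as a dictionary
    }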
py_interpreter: Python version of the Beam pipeline. If None, this defaults to python3. To track Python versions supported by Beam and related issues, check: https://issues.apache.org/jira/browse/BEAM-1251
py_requirements: Additional Python package(s) to install. If a value is passed to this parameter, a new virtual environment will be created with the additional packages installed. You can also install the apache_beam package if it is not installed on your system or if you want to use a different version.
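For example, a requirements list such as the following (package names are purely illustrative) causes a fresh virtualenv to be created and the listed packages to be installed before the pipeline runs:

    # Illustrative only: extra packages installed into the virtualenv created for the run.
    # Listing apache-beam here also pins which Beam version runs the pipeline.
    py_requirements = ["apache-beam[gcp]", "requests"]
    py_system_site_packages = False  # do not expose system packages inside the virtualenv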
py_system_site_packages: Whether to include system_site_packages in your virtualenv. See the virtualenv documentation for more information. This option is only relevant if the py_requirements parameter is not None.
gcp_conn_id: The connection ID to use when connecting to Google Cloud.
project_id: Optional, the Google Cloud project ID in which to start a job. If set to None or missing, the default project_id from the Google Cloud connection is used.
location: The location of the Dataflow job (for example, us-central1).
delegate_to: The account to impersonate using domain-wide delegation of authority, if any. For this to work, the service account making the request must have domain-wide delegation enabled.
poll_sleep: The time in seconds to sleep between polling Google Cloud Platform for the Dataflow job status while the job is in the JOB_STATE_RUNNING state.
drain_pipeline: Optional, set to True if you want to stop a streaming job by draining it instead of canceling it when the task instance is killed. See: https://cloud.google.com/dataflow/docs/guides/stopping-a-pipeline
cancel_timeout: How long (in seconds) the operator should wait for the pipeline to be successfully cancelled when the task is killed.
wait_until_finished: (Optional) If True, wait for the end of pipeline execution before exiting. If False, only submit the job. If None, the default behavior applies; it depends on the type of pipeline: for a streaming pipeline, wait for the job to start; for a batch pipeline, wait for the job to complete.
Warning: You cannot call the PipelineResult.wait_until_finish method in your pipeline code if the operator is to work properly, i.e. you must use asynchronous execution. Otherwise, your pipeline will always wait until finished. For more information, see: Asynchronous execution.
The process of starting the Dataflow job in Airflow consists of two steps:
- running a subprocess and reading the stderr/stdout log for the job ID;
- a loop that waits for the job with that ID to finish, checking its status on each iteration.
Step two starts just after step one has finished, so if you call wait_until_finish in your pipeline code, step two will not start until that subprocess stops. When the subprocess stops, step two will run, but it will execute only one iteration, as the job will already be in a terminal state.
If your pipeline does not call the wait_until_finish method and you pass wait_until_finished=True to the operator, the second loop will wait for the job's terminal state.
If your pipeline does not call the wait_until_finish method and you pass wait_until_finished=False to the operator, the second loop will check once whether the job is in a terminal state and then exit.
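The parameters above correspond to Airflow's DataflowCreatePythonJobOperator; below is a minimal DAG sketch pulling several of them together. All bucket names, project IDs, file paths, and the DAG/task IDs are hypothetical placeholders, not values taken from this reference:

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.dataflow import DataflowCreatePythonJobOperator

    with DAG(
        dag_id="example_dataflow_python",   # hypothetical DAG id
        start_date=datetime(2023, 1, 1),
        schedule_interval=None,
        catchup=False,
    ) as dag:
        start_python_job = DataflowCreatePythonJobOperator(
            task_id="start_python_job",
            py_file="gs://my-bucket/pipelines/wordcount.py",  # hypothetical pipeline file (templated)
            job_name="example-wordcount",
            py_interpreter="python3",
            py_requirements=["apache-beam[gcp]"],
            py_system_site_packages=False,
            dataflow_default_options={"project": "my-project", "region": "us-central1"},
            options={"temp_location": "gs://my-bucket/temp"},
            location="us-central1",
            wait_until_finished=True,  # block the task until the Dataflow job reaches a terminal state
        )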