MLflow Multi-Model Config Example

Airflow can integrate with tools like MLflow to streamline the model experimentation process. By combining Airflow's automation and orchestration with MLflow's core concepts, data scientists can standardize, share, and iterate on experiments more easily.

Last Updated: Feb. 8, 2022

Run this DAG

1. Install the Astronomer CLI (skip this step if you already have our CLI installed)

2. Download the repository:

3. Navigate to where the repository was cloned and start the DAG:

Example DAGs for Data Science and Machine Learning Use Cases

These examples are meant to be a guide/scaffold for Data Science and Machine Learning pipelines that can be implemented in Airflow with MLflow integration.

In an effort to keep the examples easy to follow, much of the data processing and modeling code has intentionally been kept simple.

Examples

  1. mlflow-dag.py - A simple DS pipeline from data extraction to modeling.

    • Pulls data from BigQuery into a DataFrame using the Google provider's BigQueryHook, then preps the data and trains the model
    • Passes data between tasks using XComs
    • Uses GCS as an XCom backend to easily track intermediary data in a scalable, external system
    • Trains the model with Grid Search
    • Logs model metrics to MLflow
  2. mlflow-multimodel-dag.py - A simple DS pipeline from data extraction to modeling that leverages the Task Group API to experiment with multiple models in parallel.

    • This DAG performs the same tasks as example #1 with some additions.
    • Uses Task Groups to configure training of multiple models with Grid Search in parallel.
  3. mlflow-multimodel-config-dag.py - A simple DS pipeline from data extraction to modeling that leverages the Task Group API to experiment with multiple models in parallel.

    • This DAG performs the same tasks as example #2 with the addition of passing optional grid parameters at runtime to the DAG for various models.
  4. mlflow-multimodel-register-dag.py - A simple DS pipeline from data extraction to model publication that leverages the Task Group API to experiment with multiple models in parallel.

    • This DAG performs the same tasks as example #2 with some additions.

    • Selects the best performing model and parameters then fits a final model on the full dataset for publication to the MLflow Model Registry.

    • Sample runtime config that can be passed to override the default parameters provided in models.py:

      {
        "lgbm": {
          "learning_rate": [0.01, 0.05, 0.1],
          "n_estimators": [50, 100],
          "num_leaves": [31, 40],
          "max_depth": [16, 24, 31]
        },
        "log_reg": {
          "penalty": ["l1", "l2", "elasticnet"],
          "C": [0.001, 0.01, 0.1, 1, 10],
          "solver": ["newton-cg", "lbfgs", "liblinear"]
        }
      }
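Grid Search over a parameter grid like the sample config above amounts to fitting one candidate model per entry of the cross-product of the parameter lists (this is what scikit-learn's GridSearchCV does internally). A minimal sketch of that expansion, with `expand_grid` as a hypothetical helper name:

```python
from itertools import product

def expand_grid(param_grid):
    """Enumerate every parameter combination in a Grid Search grid.

    Each returned dict is one candidate model configuration; Grid Search
    fits and scores a model for every one of them.
    """
    keys = sorted(param_grid)
    return [dict(zip(keys, values))
            for values in product(*(param_grid[key] for key in keys))]

# The "lgbm" grid above yields 3 * 2 * 2 * 3 = 36 candidate models.
lgbm_grid = {
    "learning_rate": [0.01, 0.05, 0.1],
    "n_estimators": [50, 100],
    "num_leaves": [31, 40],
    "max_depth": [16, 24, 31],
}
print(len(expand_grid(lgbm_grid)))  # 36
```

Grids grow multiplicatively, which is why trimming a list in the runtime config can shrink a DAG run's training work substantially.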
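Examples #3 and #4 accept optional grid parameters at runtime, overriding the defaults in models.py. In Airflow, such parameters arrive via `dag_run.conf` when a DAG is triggered with a config payload. A sketch of how the overlay logic might look; `resolve_grid_params` and the `DEFAULT_GRIDS` values are illustrative assumptions, not the repo's actual code:

```python
# Illustrative defaults standing in for the grids defined in models.py.
DEFAULT_GRIDS = {
    "lgbm": {"learning_rate": [0.1], "n_estimators": [100]},
    "log_reg": {"penalty": ["l2"], "C": [1]},
}

def resolve_grid_params(runtime_conf=None, defaults=DEFAULT_GRIDS):
    """Return one grid per model.

    A grid supplied in the runtime conf (e.g. dag_run.conf) replaces that
    model's default grid; models absent from the conf keep their defaults.
    """
    merged = {model: dict(grid) for model, grid in defaults.items()}
    for model, grid in (runtime_conf or {}).items():
        merged[model] = dict(grid)
    return merged

# Triggering with {"lgbm": {"learning_rate": [0.01, 0.05]}} narrows the
# lgbm search while log_reg keeps its default grid.
```

This keeps the DAG runnable with no config at all while letting a data scientist experiment with narrower or wider searches per trigger.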
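Example #4 selects the best performing model across the parallel Task Groups before fitting a final model for the MLflow Model Registry. A minimal sketch of that selection step, assuming each group reports a result dict; the metric name and dict shape here are assumptions for illustration:

```python
def select_best_run(runs, metric="test_accuracy"):
    """Return the run with the highest metric across all trained models.

    The winner's params would feed the final fit on the full dataset
    before publication to the MLflow Model Registry.
    """
    return max(runs, key=lambda run: run[metric])

# Hypothetical per-model Grid Search results gathered from the Task Groups.
runs = [
    {"model": "lgbm", "test_accuracy": 0.91, "params": {"max_depth": 16}},
    {"model": "log_reg", "test_accuracy": 0.88, "params": {"C": 1}},
]
best = select_best_run(runs)
print(best["model"])  # lgbm
```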

Sample MLflow Outputs

Runs

Plots

Metrics

Parameters