
What is Kubeflow Pipelines?

Kubeflow Pipelines is a platform for building and deploying portable, scalable, end-to-end machine learning (ML) workflows based on containers. The Kubeflow Pipelines platform consists of:

  • User interface for managing and tracking experiments, jobs, and runs
  • Engine for scheduling multi-step ML workflows
  • SDK for defining and manipulating pipelines and components (see the client sketch after this list)
  • Notebooks for interacting with the system using the SDK
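
As an illustration of how the SDK talks to the same backend that the UI uses, here is a minimal sketch of a client session that lists experiments and runs. It assumes the later kfp package name (the sample further down this page uses the earlier mlp naming) and a hypothetical endpoint for the Pipelines API service.

import kfp  # later name of the Kubeflow Pipelines SDK; the early samples use `mlp`

# Hypothetical endpoint: point this at wherever the ml-pipeline API service is exposed.
client = kfp.Client(host='http://localhost:8080')

# List the experiments and the most recent runs tracked by the platform.
for experiment in client.list_experiments().experiments or []:
    print(experiment.name)

for run in client.list_runs(page_size=10).runs or []:
    print(run.name, run.status)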

Getting started

Goals of the Kubeflow Pipelines service

The Kubeflow Pipelines service has the following goals:

  • End-to-end orchestration: enabling and simplifying the orchestration of end-to-end machine learning pipelines.
  • Easy experimentation: making it easy for you to try numerous ideas and techniques and to manage your various trials/experiments.
  • Easy re-use: enabling you to re-use components and pipelines to quickly assemble end-to-end solutions without having to rebuild each time.

The Python code that defines a pipeline workflow graph

# Assumed imports for this sample: the early Kubeflow Pipelines SDK was packaged as `mlp`
# (later renamed `kfp`); the exact import path of PipelineParam is an assumption here.
import mlp
from mlp import PipelineParam

# CreateClusterOp, DeleteClusterOp, AnalyzeOp, TransformOp, TrainerOp, PredictOp,
# ConfusionMatrixOp and RocOp are container-op components defined elsewhere in the sample.

@mlp.pipeline(
  name='XGBoost Trainer',
  description='A trainer that does end-to-end distributed training for XGBoost models.'
)
def xgb_train_pipeline(
    output,
    project,
    region=PipelineParam(value='us-central1'),
    train_data=PipelineParam(value='gs://ml-pipeline-playground/sfpd/train.csv'),
    eval_data=PipelineParam(value='gs://ml-pipeline-playground/sfpd/eval.csv'),
    schema=PipelineParam(value='gs://ml-pipeline-playground/sfpd/schema.json'),
    target=PipelineParam(value='resolution'),
    rounds=PipelineParam(value=200),
    workers=PipelineParam(value=2),
):
  delete_cluster_op = DeleteClusterOp('delete-cluster', project, region)
  with mlp.ExitHandler(exit_op=delete_cluster_op):
    create_cluster_op = CreateClusterOp('create-cluster', project, region, output)

    analyze_op = AnalyzeOp('analyze', project, region, create_cluster_op.output, schema,
                           train_data, '%s/{{workflow.name}}/analysis' % output)

    transform_op = TransformOp('transform', project, region, create_cluster_op.output,
                               train_data, eval_data, target, analyze_op.output,
                               '%s/{{workflow.name}}/transform' % output)

    train_op = TrainerOp('train', project, region, create_cluster_op.output, transform_op.outputs['train'],
                         transform_op.outputs['eval'], target, analyze_op.output, workers,
                         rounds, '%s/{{workflow.name}}/model' % output)

    predict_op = PredictOp('predict', project, region, create_cluster_op.output, transform_op.outputs['eval'],
                           train_op.output, target, analyze_op.output, '%s/{{workflow.name}}/predict' % output)

    confusion_matrix_op = ConfusionMatrixOp('confusion-matrix', predict_op.output,
                                            '%s/{{workflow.name}}/confusionmatrix' % output)

    roc_op = RocOp('roc', predict_op.output, '%s/{{workflow.name}}/roc' % output)
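
To upload the pipeline, the SDK compiles the decorated function into a workflow package. A minimal sketch, using the later kfp name for the compiler (the early mlp SDK is assumed to offer an equivalent step) and a hypothetical output filename:

import kfp.compiler

# Compile the pipeline function into a deployable workflow package,
# which can then be uploaded through the Pipelines UI.
kfp.compiler.Compiler().compile(xgb_train_pipeline, 'xgb_train_pipeline.tar.gz')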

The above pipeline after you've uploaded it

[Screenshot: Job]

The runtime execution graph of the pipeline

[Screenshot: Graph]

Outputs from the pipeline

[Screenshot: Output]

Developer Guide
