wjayesh committed (verified) · Commit 86d48e0 · Parent(s): 26d4c90

Upload llms-full.txt with huggingface_hub

Files changed (1): llms-full.txt (+501 -433)
llms-full.txt CHANGED
@@ -1,5 +1,5 @@
  This file is a merged representation of the entire codebase, combining all repository files into a single document.
- Generated by Repomix on: 2025-01-27T15:50:56.709Z
+ Generated by Repomix on: 2025-01-30T10:25:41.954Z
 
  ================================================================
  File Summary
@@ -285,6 +285,7 @@ docs/
  control-caching-behavior.md
  control-execution-order-of-steps.md
  delete-a-pipeline.md
+ fan-in-fan-out.md
  fetching-pipelines.md
  get-past-pipeline-step-runs.md
  hyper-parameter-tuning.md
@@ -315,12 +316,6 @@ docs/
  training-with-gpus/
  accelerate-distributed-training.md
  README.md
- trigger-pipelines/
- README.md
- use-templates-cli.md
- use-templates-dashboard.md
- use-templates-python.md
- use-templates-rest-api.md
  use-configuration-files/
  autogenerate-a-template-yaml-file.md
  configuration-hierarchy.md
@@ -354,6 +349,12 @@ docs/
  set-up-repository.md
  interact-with-secrets.md
  README.md
  debug-and-solve-issues.md
  reference/
  api-reference.md
@@ -16272,7 +16273,7 @@ The image above shows the hierarchy of concepts in ZenML Pro.
  - [**Teams**](./teams.md) are groups of users within an organization. They help in organizing users and managing access to resources.
  - **Users** are single individual accounts on a ZenML Pro instance.
  - [**Roles**](./roles.md) are used to control what actions users can perform within a tenant or inside an organization.
- - [**Templates**](../../how-to/pipeline-development/trigger-pipelines/README.md) are pipeline runs that can be re-run with a different configuration.
 
  More details about each of these concepts are available in their linked pages below:
 
@@ -16408,7 +16409,7 @@ that expand the functionality of the Open Source product. ZenML Pro adds a manag
  - **User management with teams**: Create [organizations](./organization.md) and [teams](./teams.md) to easily manage users at scale.
  - **Role-based access control and permissions**: Implement fine-grained access control using customizable [roles](./roles.md) to ensure secure and efficient resource management.
  - **Enhanced model and artifact control plane**: Leverage the [Model Control Plane](../../user-guide/starter-guide/track-ml-models.md) and [Artifact Control Plane](../../user-guide/starter-guide/manage-artifacts.md) for improved tracking and management of your ML assets.
- - **Triggers and run templates**: ZenML Pro enables you to [create and run templates](../../how-to/pipeline-development/trigger-pipelines/README.md#run-templates). This way, you can use the dashboard or our Client/REST API to run a pipeline with updated configuration, allowing you to iterate quickly with minimal friction.
  - **Early-access features**: Get early access to pro-specific features such as triggers, filters, sorting, generating usage reports, and more.
 
  Learn more about ZenML Pro on the [ZenML website](https://zenml.io/pro).
@@ -16764,8 +16765,8 @@ Some Pro-only features that you can leverage in your tenant are as follows:
 
  - [Model Control Plane](../../../../docs/book/how-to/model-management-metrics/model-control-plane/register-a-model.md)
  - [Artifact Control Plane](../../how-to/data-artifact-management/handle-data-artifacts/README.md)
- - [Ability to run pipelines from the Dashboard](../../../../docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-rest-api.md),
- - [Create templates out of your pipeline runs](../../../../docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-rest-api.md)
 
  and [more](https://zenml.io/pro)!
 
@@ -19506,7 +19507,7 @@ def example_pipeline():
  example_pipeline()
  ```
 
- You can see another example of using an `UnmaterializedArtifact` when triggering a [pipeline from another](../../pipeline-development/trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline).
 
  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
 
@@ -34950,7 +34951,7 @@ def training_pipeline():
  ```
 
  {% hint style="info" %}
- Here we are calling one pipeline from within another pipeline, so functionally the `data_loading_pipeline` is functioning as a step within the `training_pipeline`, i.e. the steps of the former are added to the latter. Only the parent pipeline will be visible in the dashboard. In order to actually trigger a pipeline from another, see [here](../../pipeline-development/trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline)
  {% endhint %}
 
  <table data-view="cards"><thead><tr><th></th><th></th><th></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td>Learn more about orchestrators here</td><td></td><td></td><td><a href="../../../component-guide/orchestrators/orchestrators.md">orchestrators.md</a></td></tr></tbody></table>
@@ -34977,9 +34978,9 @@ You can learn more about these options [here](../../pipeline-development/use-con
 
  However, there is one exception: if you would like to trigger a pipeline from the client
  or another pipeline, you would need to pass the `PipelineRunConfiguration` object.
- Learn more about this [here](../../pipeline-development/trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline).
 
- <table data-view="cards"><thead><tr><th></th><th></th><th></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td>Using config files</td><td></td><td></td><td><a href="../../pipeline-development/use-configuration-files/README.md">../../pipeline-development/use-configuration-files/README.md</a></td></tr></tbody></table>
 
  <!-- For scarf -->
  <figure><img alt="ZenML Scarf" referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" /></figure>
@@ -35147,6 +35148,87 @@ Client().delete_pipeline_run(<RUN_NAME_OR_ID>)
  {% endtab %}
  {% endtabs %}
 
  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
 
  ================
@@ -35561,98 +35643,83 @@ description: Running a hyperparameter tuning trial with ZenML.
 
  # Hyperparameter tuning
 
- {% hint style="warning" %}
- Hyperparameter tuning is not yet a first-class citizen in ZenML, but it is [(high up) on our roadmap of features](https://zenml.featureos.app/p/enable-hyper-parameter-tuning) and will likely receive first-class ZenML support soon. In the meanwhile, the following example shows how hyperparameter tuning can currently be implemented within a ZenML run.
- {% endhint %}
-
- A basic iteration through a number of hyperparameters can be achieved with ZenML by using a simple pipeline like this:
 
  ```python
- @pipeline
- def my_pipeline(step_count: int) -> None:
-     data = load_data_step()
-     after = []
-     for i in range(step_count):
-         train_step(data, learning_rate=i * 0.0001, id=f"train_step_{i}")
-         after.append(f"train_step_{i}")
-     model = select_model_step(..., after=after)
- ```
-
- This is an implementation of a basic grid search (across a single dimension) that would allow for a different learning rate to be used across the same `train_step`. Once that step has been run for all the different learning rates, the `select_model_step` finds which hyperparameters gave the best results or performance.
 
- <details>
 
- <summary>See it in action with the E2E example</summary>
 
- _To set up the local environment used below, follow the recommendations from the_ [_Project templates_](../../../how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md)_._
 
- In [`pipelines/training.py`](../../../../examples/e2e/pipelines/training.py), you will find a training pipeline with a `Hyperparameter tuning stage` section. It contains a `for` loop that runs the `hp_tuning_single_search` over the configured model search spaces, followed by the `hp_tuning_select_best_model` being executed after all search steps are completed. As a result, we are getting `best_model_config` to be used to train the best possible model later on.
 
- ```python
- ...
- ########## Hyperparameter tuning stage ##########
- after = []
- search_steps_prefix = "hp_tuning_search_"
- for i, model_search_configuration in enumerate(
-     MetaConfig.model_search_space
- ):
-     step_name = f"{search_steps_prefix}{i}"
-     hp_tuning_single_search(
-         model_metadata=ExternalArtifact(
-             value=model_search_configuration,
-         ),
-         id=step_name,
-         dataset_trn=dataset_trn,
-         dataset_tst=dataset_tst,
-         target=target,
-     )
-     after.append(step_name)
- best_model_config = hp_tuning_select_best_model(
-     search_steps_prefix=search_steps_prefix, after=after
- )
- ...
- ```
-
- </details>
-
- The main challenge of this implementation is that it is currently not possible to pass a variable number of artifacts into a step programmatically, so the `select_model_step` needs to query all artifacts produced by the previous steps via the ZenML Client instead:
 
- ```python
- from zenml import step, get_step_context
- from zenml.client import Client
 
  @step
- def select_model_step():
      run_name = get_step_context().pipeline_run.name
      run = Client().get_pipeline_run(run_name)
 
-     # Fetch all models trained by a 'train_step' before
      trained_models_by_lr = {}
-     for step_name, step in run.steps.items():
-         if step_name.startswith("train_step"):
-             for output_name, output in step.outputs.items():
-                 if output_name == "<NAME_OF_MODEL_OUTPUT_IN_TRAIN_STEP>":
-                     model = output.load()
-                     lr = step.config.parameters["learning_rate"]
-                     trained_models_by_lr[lr] = model
-
-     # Evaluate the models to find the best one
      for lr, model in trained_models_by_lr.items():
-         ...
- ```
 
- <details>
 
- <summary>See it in action with the E2E example</summary>
 
- _To set up the local environment used below, follow the recommendations from the_ [_Project templates_](../../../how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md)_._
 
- In the `steps/hp_tuning` folder, you will find two step files, which can be used as a starting point for building your own hyperparameter search tailored specifically to your use case:
 
- * [`hp_tuning_single_search(...)`](../../../../examples/e2e/steps/hp_tuning/hp_tuning_single_search.py) performs a randomized search for the best model hyperparameters in a configured space.
- * [`hp_tuning_select_best_model(...)`](../../../../examples/e2e/steps/hp_tuning/hp_tuning_select_best_model.py) searches for the best hyperparameters, looping over the results of previous random searches to find the best model according to a defined metric.
 
- </details>
 
  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
 
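The grid-search pattern in the removed docs (train one step per learning rate, then select the best) can be sketched in plain Python without ZenML. This is a toy illustration of the control flow only: `train` and `select_best` are hypothetical names, and the quadratic "loss" is a stand-in for real training and evaluation.

```python
# Toy stand-in for a training step: returns the "model" and its score.
def train(learning_rate: float) -> dict:
    loss = (learning_rate - 0.0003) ** 2  # pretend 0.0003 is the optimum
    return {"learning_rate": learning_rate, "loss": loss}


# Mirrors the role of select_model_step: pick the model with the lowest loss.
def select_best(models: list) -> dict:
    return min(models, key=lambda m: m["loss"])


# One "train step" per candidate learning rate, as in the pipeline loop above.
models = [train(i * 0.0001) for i in range(5)]
best = select_best(models)
```

In the real ZenML version, the selection step cannot receive a variable number of artifacts directly, which is why it queries the run's step outputs via the Client instead of taking `models` as a parameter.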
@@ -35752,7 +35819,7 @@ locally or remotely. See our documentation on this [here](../../../getting-start
 
  Check below for more advanced ways to build and interact with your pipeline.
 
- <table data-view="cards"><thead><tr><th></th><th></th><th></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td>Configure pipeline/step parameters</td><td></td><td></td><td><a href="use-pipeline-step-parameters.md">use-pipeline-step-parameters.md</a></td></tr><tr><td>Name and annotate step outputs</td><td></td><td></td><td><a href="step-output-typing-and-annotation.md">step-output-typing-and-annotation.md</a></td></tr><tr><td>Control caching behavior</td><td></td><td></td><td><a href="control-caching-behavior.md">control-caching-behavior.md</a></td></tr><tr><td>Run pipeline from a pipeline</td><td></td><td></td><td><a href="../trigger-pipelines/README.md">README.md</a></td></tr><tr><td>Control the execution order of steps</td><td></td><td></td><td><a href="control-execution-order-of-steps.md">control-execution-order-of-steps.md</a></td></tr><tr><td>Customize the step invocation ids</td><td></td><td></td><td><a href="using-a-custom-step-invocation-id.md">using-a-custom-step-invocation-id.md</a></td></tr><tr><td>Name your pipeline runs</td><td></td><td></td><td><a href="name-your-pipeline-runs.md">name-your-pipeline-runs.md</a></td></tr><tr><td>Use failure/success hooks</td><td></td><td></td><td><a href="use-failure-success-hooks.md">use-failure-success-hooks.md</a></td></tr><tr><td>Hyperparameter tuning</td><td></td><td></td><td><a href="hyper-parameter-tuning.md">hyper-parameter-tuning.md</a></td></tr><tr><td>Attach metadata to a step</td><td></td><td></td><td><a href="../../model-management-metrics/track-metrics-metadata/attach-metadata-to-a-step.md">attach-metadata-to-a-step.md</a></td></tr><tr><td>Fetch metadata within steps</td><td></td><td></td><td><a href="../../model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md">fetch-metadata-within-steps.md</a></td></tr><tr><td>Fetch metadata during pipeline composition</td><td></td><td></td><td><a href="../../model-management-metrics/track-metrics-metadata/fetch-metadata-within-pipeline.md">fetch-metadata-within-pipeline.md</a></td></tr><tr><td>Enable or disable logs storing</td><td></td><td></td><td><a href="../../control-logging/enable-or-disable-logs-storing.md">enable-or-disable-logs-storing.md</a></td></tr><tr><td>Special Metadata Types</td><td></td><td></td><td><a href="../../model-management-metrics/track-metrics-metadata/logging-metadata.md">logging-metadata.md</a></td></tr><tr><td>Access secrets in a step</td><td></td><td></td><td><a href="access-secrets-in-a-step.md">access-secrets-in-a-step.md</a></td></tr></tbody></table>
 
  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
 
@@ -37471,347 +37538,6 @@ to help you out.
 
  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
 
- ================
- File: docs/book/how-to/pipeline-development/trigger-pipelines/README.md
- ================
- ---
- icon: bell-concierge
- description: There are numerous ways to trigger a pipeline
- ---
-
- # Trigger a pipeline
-
- In ZenML, the simplest way to execute a run is to use your pipeline function:
-
- ```python
- from zenml import step, pipeline
-
-
- @step  # Just add this decorator
- def load_data() -> dict:
-     training_data = [[1, 2], [3, 4], [5, 6]]
-     labels = [0, 1, 0]
-     return {'features': training_data, 'labels': labels}
-
-
- @step
- def train_model(data: dict) -> None:
-     total_features = sum(map(sum, data['features']))
-     total_labels = sum(data['labels'])
-
-     # Train some model here...
-
-     print(
-         f"Trained model using {len(data['features'])} data points. "
-         f"Feature sum is {total_features}, label sum is {total_labels}."
-     )
-
-
- @pipeline  # This function combines steps together
- def simple_ml_pipeline():
-     dataset = load_data()
-     train_model(dataset)
-
-
- if __name__ == "__main__":
-     simple_ml_pipeline()
- ```
-
- However, there are other ways to trigger a pipeline, specifically a pipeline
- with a remote stack (remote orchestrator, artifact store, and container
- registry).
-
- ## Run Templates
-
- **Run Templates** are pre-defined, parameterized configurations for your ZenML
- pipelines that can be easily executed from the ZenML dashboard or via our
- Client/REST API. Think of them as blueprints for your pipeline runs, ready
- to be customized on the fly.
-
- {% hint style="success" %}
- This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please
- [sign up here](https://cloud.zenml.io) to get access.
- {% endhint %}
-
- ![Working with Templates](../../../.gitbook/assets/run-templates.gif)
-
- <table data-view="cards"><thead><tr><th></th><th></th><th></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td>Use templates: Python SDK</td><td></td><td></td><td><a href="use-templates-python.md">use-templates-python.md</a></td></tr><tr><td>Use templates: CLI</td><td></td><td></td><td><a href="use-templates-cli.md">use-templates-cli.md</a></td></tr><tr><td>Use templates: Dashboard</td><td></td><td></td><td><a href="use-templates-dashboard.md">use-templates-dashboard.md</a></td></tr><tr><td>Use templates: Rest API</td><td></td><td></td><td><a href="use-templates-rest-api.md">use-templates-rest-api.md</a></td></tr></tbody></table>
- <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
-
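Stripped of the ZenML decorators, the removed README's example pipeline is plain function composition. A minimal sketch of what the two steps compute, runnable without ZenML installed (the return value replaces the original `print` so the result can be inspected):

```python
def load_data() -> dict:
    # Same toy dataset as the removed README example.
    training_data = [[1, 2], [3, 4], [5, 6]]
    labels = [0, 1, 0]
    return {'features': training_data, 'labels': labels}


def train_model(data: dict) -> str:
    total_features = sum(map(sum, data['features']))  # 1+2+3+4+5+6 = 21
    total_labels = sum(data['labels'])                # 0+1+0 = 1
    return (
        f"Trained model using {len(data['features'])} data points. "
        f"Feature sum is {total_features}, label sum is {total_labels}."
    )


# The @pipeline function simply wires the steps together:
message = train_model(load_data())
```

The `@step`/`@pipeline` decorators add orchestration, caching, and artifact tracking on top of exactly this call graph.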
- ================
- File: docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-cli.md
- ================
- ---
- description: Create a template using the ZenML CLI
- ---
-
- {% hint style="success" %}
- This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please
- [sign up here](https://cloud.zenml.io) to get access.
- {% endhint %}
-
- ## Create a template
-
- You can use the ZenML CLI to create a run template:
-
- ```bash
- # The <PIPELINE_SOURCE_PATH> will be `run.my_pipeline` if you defined a
- # pipeline with name `my_pipeline` in a file called `run.py`
- zenml pipeline create-run-template <PIPELINE_SOURCE_PATH> --name=<TEMPLATE_NAME>
- ```
-
- {% hint style="warning" %}
- You need to have an active **remote stack** while running this command, or you can specify
- one with the `--stack` option.
- {% endhint %}
-
- <!-- For scarf -->
- <figure><img alt="ZenML Scarf" referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" /></figure>
-
- ================
- File: docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-dashboard.md
- ================
- ---
- description: Create and run a template over the ZenML Dashboard
- ---
-
- {% hint style="success" %}
- This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please
- [sign up here](https://cloud.zenml.io) to get access.
- {% endhint %}
-
- ## Create a template
-
- In order to create a template over the dashboard, go to a pipeline run that you
- executed on a remote stack (i.e. at least a remote orchestrator, artifact
- store, and container registry):
-
- ![Create Templates on the dashboard](../../../.gitbook/assets/run-templates-create-1.png)
-
- Click on `+ New Template`, give it a name, and click `Create`:
-
- ![Template Details](../../../.gitbook/assets/run-templates-create-2.png)
-
- ## Run a template using the dashboard
-
- In order to run a template from the dashboard:
-
- - You can either click `Run a Pipeline` on the main `Pipelines` page, or
- - You can go to a specific template page and click on `Run Template`.
-
- Either way, you will be forwarded to a page where you will see the
- `Run Details`. Here, you have the option to upload a `.yaml` [configuration file](../../pipeline-development/use-configuration-files/README.md)
- or change the configuration on the go by using our editor.
-
- ![Run Details](../../../.gitbook/assets/run-templates-run-1.png)
-
- Once you run the template, a new run will be executed on the same stack as
- the original run.
-
- <!-- For scarf -->
- <figure><img alt="ZenML Scarf" referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" /></figure>
-
- ================
- File: docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-python.md
- ================
- ---
- description: Create and run a template using the ZenML Python SDK
- ---
-
- {% hint style="success" %}
- This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please
- [sign up here](https://cloud.zenml.io) to get access.
- {% endhint %}
-
- ## Create a template
-
- You can use the ZenML client to create a run template:
-
- ```python
- from zenml.client import Client
-
- run = Client().get_pipeline_run(<RUN_NAME_OR_ID>)
-
- Client().create_run_template(
-     name=<TEMPLATE_NAME>,
-     deployment_id=run.deployment_id
- )
- ```
-
- {% hint style="warning" %}
- You need to select **a pipeline run that was executed on a remote stack**
- (i.e. at least a remote orchestrator, artifact store, and container registry).
- {% endhint %}
-
- You can also create a template directly from your pipeline definition by running the
- following code while having a **remote stack** active:
-
- ```python
- from zenml import pipeline
-
- @pipeline
- def my_pipeline():
-     ...
-
- template = my_pipeline.create_run_template(name=<TEMPLATE_NAME>)
- ```
-
- ## Run a template
-
- You can use the ZenML client to run a template:
-
- ```python
- from zenml.client import Client
-
- template = Client().get_run_template(<TEMPLATE_NAME>)
-
- config = template.config_template
-
- # [OPTIONAL] ---- modify the config here ----
-
- Client().trigger_pipeline(
-     template_id=template.id,
-     run_configuration=config,
- )
- ```
-
- Once you trigger the template, a new run will be executed on the same stack as
- the original run.
-
- ## Advanced Usage: Run a template from another pipeline
-
- It is also possible to use the same logic to run a pipeline within another
- pipeline:
-
- ```python
- import pandas as pd
-
- from zenml import pipeline, step
- from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact
- from zenml.artifacts.utils import load_artifact
- from zenml.client import Client
- from zenml.config.pipeline_run_configuration import PipelineRunConfiguration
-
-
- @step
- def trainer(data_artifact_id: str):
-     df = load_artifact(data_artifact_id)
-
-
- @pipeline
- def training_pipeline():
-     trainer()
-
-
- @step
- def load_data() -> pd.DataFrame:
-     ...
-
-
- @step
- def trigger_pipeline(df: UnmaterializedArtifact):
-     # By using UnmaterializedArtifact we can get the ID of the artifact
-     run_config = PipelineRunConfiguration(
-         steps={"trainer": {"parameters": {"data_artifact_id": df.id}}}
-     )
-
-     Client().trigger_pipeline("training_pipeline", run_configuration=run_config)
-
-
- @pipeline
- def loads_data_and_triggers_training():
-     df = load_data()
-     trigger_pipeline(df)  # Will trigger the other pipeline
- ```
-
- Read more about the [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) and [`trigger_pipeline`](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client) function in the [SDK Docs](https://sdkdocs.zenml.io/).
-
- Read more about Unmaterialized Artifacts [here](../../data-artifact-management/complex-usecases/unmaterialized-artifacts.md).
-
- <!-- For scarf -->
- <figure><img alt="ZenML Scarf" referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" /></figure>
-
- ================
- File: docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-rest-api.md
- ================
- ---
- description: Create and run a template over the ZenML REST API
- ---
-
- {% hint style="success" %}
- This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please
- [sign up here](https://cloud.zenml.io) to get access.
- {% endhint %}
-
- ## Run a template
-
- Triggering a pipeline from the REST API **only** works if you've created at
- least one run template for that pipeline.
-
- As a prerequisite, you need a pipeline name. After you have it, there are
- three calls that need to be made in order to trigger a pipeline from the
- REST API:
-
- 1. `GET /pipelines?name=<PIPELINE_NAME>` -> This returns a response from which a <PIPELINE_ID> can be copied
- 2. `GET /run_templates?pipeline_id=<PIPELINE_ID>` -> This returns a list of responses from which a <TEMPLATE_ID> can be chosen
- 3. `POST /run_templates/<TEMPLATE_ID>/runs` -> This runs the pipeline. You can pass the [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) in the body
-
- ## A working example
-
- {% hint style="info" %}
- Learn how to get a bearer token for the curl commands
- [here](../../../reference/api-reference.md#using-a-bearer-token-to-access-the-api-programmatically).
- {% endhint %}
-
- Here is an example. Let's say we would like to re-run a pipeline called
- `training`. We first query the `/pipelines` endpoint:
-
- ```shell
- curl -X 'GET' \
-   '<YOUR_ZENML_SERVER_URL>/api/v1/pipelines?hydrate=false&name=training' \
-   -H 'accept: application/json' \
-   -H 'Authorization: Bearer <YOUR_TOKEN>'
- ```
-
- <figure><img src="../../../.gitbook/assets/rest_api_step_1.png" alt=""><figcaption><p>Identifying the pipeline ID</p></figcaption></figure>
-
- We can take the ID from any object in the list of responses. In this case,
- the <PIPELINE_ID> is `c953985e-650a-4cbf-a03a-e49463f58473` in the response.
-
- After this, we take the pipeline ID and call the `/run_templates?pipeline_id=<PIPELINE_ID>` API:
-
- ```shell
- curl -X 'GET' \
-   '<YOUR_ZENML_SERVER_URL>/api/v1/run_templates?hydrate=false&logical_operator=and&page=1&size=20&pipeline_id=c953985e-650a-4cbf-a03a-e49463f58473' \
-   -H 'accept: application/json' \
-   -H 'Authorization: Bearer <YOUR_TOKEN>'
- ```
-
- We can now take the <TEMPLATE_ID> from this response. Here it is `b826b714-a9b3-461c-9a6e-1bde3df3241d`.
-
- <figure><img src="../../../.gitbook/assets/rest_api_step_2.png" alt=""><figcaption><p>Identifying the template ID</p></figcaption></figure>
-
- Finally, we can use the template ID to trigger the pipeline with a different
- configuration:
-
- ```shell
- curl -X 'POST' \
-   '<YOUR_ZENML_SERVER_URL>/api/v1/run_templates/b826b714-a9b3-461c-9a6e-1bde3df3241d/runs' \
-   -H 'accept: application/json' \
-   -H 'Content-Type: application/json' \
-   -H 'Authorization: Bearer <YOUR_TOKEN>' \
-   -d '{
-     "steps": {"model_trainer": {"parameters": {"model_type": "rf"}}}
-   }'
- ```
-
- A positive response means your pipeline has been re-triggered with a
- different config!
-
- <!-- For scarf -->
- <figure><img alt="ZenML Scarf" referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" /></figure>
-
  ================
  File: docs/book/how-to/pipeline-development/use-configuration-files/autogenerate-a-template-yaml-file.md
  ================
@@ -40802,6 +40528,347 @@ icon: building-columns
 
  This section covers all aspects of setting up and managing ZenML projects.
 
  ================
  File: docs/book/how-to/debug-and-solve-issues.md
  ================
@@ -48714,6 +48781,7 @@ File: docs/book/toc.md
  * [Name your pipeline runs](how-to/pipeline-development/build-pipelines/name-your-pipeline-runs.md)
  * [Tag your pipeline runs](how-to/pipeline-development/build-pipelines/tag-your-pipeline-runs.md)
  * [Use failure/success hooks](how-to/pipeline-development/build-pipelines/use-failure-success-hooks.md)
  * [Hyperparameter tuning](how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md)
  * [Access secrets in a step](how-to/pipeline-development/build-pipelines/access-secrets-in-a-step.md)
  * [Run an individual step](how-to/pipeline-development/build-pipelines/run-an-individual-step.md)
@@ -48722,11 +48790,6 @@ File: docs/book/toc.md
  * [Develop locally](how-to/pipeline-development/develop-locally/README.md)
  * [Use config files to develop locally](how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md)
  * [Keep your pipelines and dashboard clean](how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md)
- * [Trigger a pipeline](how-to/pipeline-development/trigger-pipelines/README.md)
- * [Use templates: Python SDK](how-to/pipeline-development/trigger-pipelines/use-templates-python.md)
- * [Use templates: CLI](how-to/pipeline-development/trigger-pipelines/use-templates-cli.md)
- * [Use templates: Dashboard](how-to/pipeline-development/trigger-pipelines/use-templates-dashboard.md)
- * [Use templates: Rest API](how-to/pipeline-development/trigger-pipelines/use-templates-rest-api.md)
  * [Use configuration files](how-to/pipeline-development/use-configuration-files/README.md)
  * [How to configure a pipeline with a YAML](how-to/pipeline-development/use-configuration-files/how-to-use-config.md)
  * [What can be configured](how-to/pipeline-development/use-configuration-files/what-can-be-configured.md)
@@ -48742,6 +48805,11 @@ File: docs/book/toc.md
48742
  * [Configure Python environments](how-to/pipeline-development/configure-python-environments/README.md)
48743
  * [Handling dependencies](how-to/pipeline-development/configure-python-environments/handling-dependencies.md)
48744
  * [Configure the server environment](how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md)
 
 
 
 
 
48745
  * [Customize Docker builds](how-to/customize-docker-builds/README.md)
48746
  * [Docker settings on a pipeline](how-to/customize-docker-builds/docker-settings-on-a-pipeline.md)
48747
  * [Docker settings on a step](how-to/customize-docker-builds/docker-settings-on-a-step.md)
 
1
  This file is a merged representation of the entire codebase, combining all repository files into a single document.
2
+ Generated by Repomix on: 2025-01-30T10:25:41.954Z
3
 
4
  ================================================================
5
  File Summary
 
285
  control-caching-behavior.md
286
  control-execution-order-of-steps.md
287
  delete-a-pipeline.md
288
+ fan-in-fan-out.md
289
  fetching-pipelines.md
290
  get-past-pipeline-step-runs.md
291
  hyper-parameter-tuning.md
 
316
  training-with-gpus/
317
  accelerate-distributed-training.md
318
  README.md
 
 
 
 
 
 
319
  use-configuration-files/
320
  autogenerate-a-template-yaml-file.md
321
  configuration-hierarchy.md
 
349
  set-up-repository.md
350
  interact-with-secrets.md
351
  README.md
352
+ trigger-pipelines/
353
+ README.md
354
+ use-templates-cli.md
355
+ use-templates-dashboard.md
356
+ use-templates-python.md
357
+ use-templates-rest-api.md
358
  debug-and-solve-issues.md
359
  reference/
360
  api-reference.md
 
16273
  - [**Teams**](./teams.md) are groups of users within an organization. They help in organizing users and managing access to resources.
16274
  - **Users** are single individual accounts on a ZenML Pro instance.
16275
  - [**Roles**](./roles.md) are used to control what actions users can perform within a tenant or inside an organization.
16276
+ - [**Templates**](../../how-to/trigger-pipelines/README.md) are pipeline runs that can be re-run with a different configuration.
16277
 
16278
  More details about each of these concepts are available in their linked pages below:
16279
 
 
16409
  - **User management with teams**: Create [organizations](./organization.md) and [teams](./teams.md) to easily manage users at scale.
16410
  - **Role-based access control and permissions**: Implement fine-grained access control using customizable [roles](./roles.md) to ensure secure and efficient resource management.
16411
  - **Enhanced model and artifact control plane**: Leverage the [Model Control Plane](../../user-guide/starter-guide/track-ml-models.md) and [Artifact Control Plane](../../user-guide/starter-guide/manage-artifacts.md) for improved tracking and management of your ML assets.
16412
+ - **Triggers and run templates**: ZenML Pro enables you to [create and run templates](../../how-to/trigger-pipelines/README.md#run-templates). This way, you can use the dashboard or our Client/REST API to run a pipeline with updated configuration, allowing you to iterate quickly with minimal friction.
16413
  - **Early-access features**: Get early access to pro-specific features such as triggers, filters, sorting, generating usage reports, and more.
16414
 
16415
  Learn more about ZenML Pro on the [ZenML website](https://zenml.io/pro).
 
16765
 
16766
  - [Model Control Plane](../../../../docs/book/how-to/model-management-metrics/model-control-plane/register-a-model.md)
16767
  - [Artifact Control Plane](../../how-to/data-artifact-management/handle-data-artifacts/README.md)
16768
+ - [Ability to run pipelines from the Dashboard](../../../../docs/book/how-to/trigger-pipelines/use-templates-dashboard.md),
16769
+ - [Create templates out of your pipeline runs](../../../../docs/book/how-to/trigger-pipelines/use-templates-rest-api.md)
16770
 
16771
  and [more](https://zenml.io/pro)!
16772
 
 
19507
  example_pipeline()
19508
  ```
19509
 
19510
+ You can see another example of using an `UnmaterializedArtifact` when triggering one [pipeline from another](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline).
19511
 
19512
  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
19513
 
 
34951
  ```
34952
 
34953
  {% hint style="info" %}
34954
+ Here we are calling one pipeline from within another pipeline, so the `data_loading_pipeline` effectively functions as a step within the `training_pipeline`, i.e. the steps of the former are added to the latter. Only the parent pipeline will be visible in the dashboard. To actually trigger one pipeline from another, see [here](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline).
34955
  {% endhint %}
34956
 
34957
  <table data-view="cards"><thead><tr><th></th><th></th><th></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td>Learn more about orchestrators here</td><td></td><td></td><td><a href="../../../component-guide/orchestrators/orchestrators.md">orchestrators.md</a></td></tr></tbody></table>
 
34978
 
34979
  However, there is one exception: if you would like to trigger a pipeline from the client
34980
  or another pipeline, you would need to pass the `PipelineRunConfiguration` object.
34981
+ Learn more about this [here](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline).
34982
 
34983
+ <table data-view="cards"><thead><tr><th></th><th></th><th></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td>Using config files</td><td></td><td></td><td><a href="../../use-configuration-files/README.md">README.md</a></td></tr></tbody></table>
34984
 
34985
  <!-- For scarf -->
34986
  <figure><img alt="ZenML Scarf" referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" /></figure>
 
35148
  {% endtab %}
35149
  {% endtabs %}
35150
 
35151
+ <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
35152
+
35153
+ ================
35154
+ File: docs/book/how-to/pipeline-development/build-pipelines/fan-in-fan-out.md
35155
+ ================
35156
+ ---
35157
+ description: Running steps in parallel.
35158
+ ---
35159
+
35160
+ # Fan-in and Fan-out Patterns
35161
+
35162
+ The fan-out/fan-in pattern is a common pipeline architecture where a single step splits into multiple parallel operations (fan-out) and then consolidates the results back into a single step (fan-in). This pattern is particularly useful for parallel processing, distributed workloads, or when you need to process data through different transformations and then aggregate the results. For example, you might want to process different chunks of data in parallel and then aggregate the results:
35163
+
35164
+ ```python
35165
+ from zenml import step, get_step_context, pipeline
35166
+ from zenml.client import Client
35167
+
35168
+
35169
+ @step
35170
+ def load_step() -> str:
35171
+ return "Hello from ZenML!"
35172
+
35173
+
35174
+ @step
35175
+ def process_step(input_data: str) -> str:
35176
+ return input_data
35177
+
35178
+
35179
+ @step
35180
+ def combine_step(step_prefix: str, output_name: str) -> None:
35181
+ run_name = get_step_context().pipeline_run.name
35182
+ run = Client().get_pipeline_run(run_name)
35183
+
35184
+ # Fetch all results from parallel processing steps
35185
+ processed_results = {}
35186
+ for step_name, step_info in run.steps.items():
35187
+ if step_name.startswith(step_prefix):
35188
+ output = step_info.outputs[output_name][0]
35189
+ processed_results[step_info.name] = output.load()
35190
+
35191
+ # Combine all results
35192
+ print(",".join([f"{k}: {v}" for k, v in processed_results.items()]))
35193
+
35194
+
35195
+ @pipeline(enable_cache=False)
35196
+ def fan_out_fan_in_pipeline(parallel_count: int) -> None:
35197
+ # Initial step (source)
35198
+ input_data = load_step()
35199
+
35200
+ # Fan out: Process data in parallel branches
35201
+ after = []
35202
+ for i in range(parallel_count):
35203
+ _ = process_step(input_data, id=f"process_{i}")
35204
+ after.append(f"process_{i}")
35205
+
35206
+ # Fan in: Combine results from all parallel branches
35207
+ combine_step(step_prefix="process_", output_name="output", after=after)
35208
+
35209
+
35210
+ fan_out_fan_in_pipeline(parallel_count=8)
35211
+ ```
35212
+
35213
+ The fan-out pattern allows for parallel processing and better resource utilization, while the fan-in pattern enables aggregation and consolidation of results. This is particularly useful for:
35214
+
35215
+ - Parallel data processing
35216
+ - Distributed model training
35217
+ - Ensemble methods
35218
+ - Batch processing
35219
+ - Data validation across multiple sources
35220
+ - [Hyperparameter tuning](./hyper-parameter-tuning.md)
35221
+
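Outside ZenML, the same pattern is simply "map in parallel, then reduce". A plain-Python sketch using only the standard library (not ZenML's API; the chunk data here is purely illustrative):

```python
from concurrent.futures import ThreadPoolExecutor


def process(chunk: list) -> int:
    # Fan-out: each chunk is processed independently
    return sum(chunk)


chunks = [[1, 2], [3, 4], [5, 6]]
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(process, chunks))  # parallel branches

# Fan-in: consolidate the partial results into one value
total = sum(partials)
print(total)  # 21
```

In a ZenML pipeline, the "reduce" half is the fan-in step shown above, which fetches each branch's output via the Client rather than receiving it as a direct input.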
35222
+ Note that when implementing the fan-in step, you'll need to use the ZenML Client to query the results of the previous parallel steps, as shown in the example above; the results cannot be passed directly as step inputs.
35223
+
35224
+ {% hint style="warning" %}
35225
+ The fan-out, fan-in pattern has the following limitations:
35226
+
35227
+ 1. Steps run sequentially rather than in parallel if the underlying orchestrator does not support parallel step runs (e.g. with the local orchestrator).
35228
+ 2. The number of steps needs to be known ahead of time; ZenML does not yet support creating steps dynamically at runtime.
35229
+ {% endhint %}
35230
+
35231
+
35232
  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
35233
 
35234
  ================
 
35643
 
35644
  # Hyperparameter tuning
35645
 
35646
+ A basic iteration through a number of hyperparameters can be achieved with
35647
+ ZenML by using a simple pipeline. The following example showcases an
35648
+ implementation of a basic grid search (across a single dimension)
35649
+ that would allow for a different learning rate to be used across the
35650
+ same `train_step`. Once that step has been run for all the different
35651
+ learning rates, the `selection_step` finds which hyperparameters gave the
35652
+ best results. It utilizes the [fan-out, fan-in pattern of
35653
+ building a pipeline](./fan-in-fan-out.md).
35654
 
35655
  ```python
35656
+ from typing import Annotated
35657
 
35658
+ from sklearn.base import ClassifierMixin
35659
 
35660
+ from zenml import step, pipeline, get_step_context
35661
+ from zenml.client import Client
35662
 
35663
+ model_output_name = "my_model"
35664
 
 
35665
 
35666
+ @step
35667
+ def train_step(
35668
+ learning_rate: float
35669
+ ) -> Annotated[ClassifierMixin, model_output_name]:
35670
+ return ... # Train a model with the learning rate and return it here.
35671
 
 
 
 
35672
 
35673
  @step
35674
+ def selection_step(step_prefix: str, output_name: str) -> None:
35675
  run_name = get_step_context().pipeline_run.name
35676
  run = Client().get_pipeline_run(run_name)
35677
 
 
35678
  trained_models_by_lr = {}
35679
+ for step_name, step_info in run.steps.items():
35680
+ if step_name.startswith(step_prefix):
35681
+ model = step_info.outputs[output_name][0].load()
35682
+ lr = step_info.config.parameters["learning_rate"]
35683
+ trained_models_by_lr[lr] = model
35684
+
 
 
 
35685
  for lr, model in trained_models_by_lr.items():
35686
+ ... # Evaluate the models to find the best one
 
35687
 
 
35688
 
35689
+ @pipeline
35690
+ def my_pipeline(step_count: int) -> None:
35691
+ after = []
35692
+ for i in range(step_count):
35693
+ train_step(learning_rate=i * 0.0001, id=f"train_step_{i}")
35694
+ after.append(f"train_step_{i}")
35695
 
35696
+ selection_step(
35697
+ step_prefix="train_step_",
35698
+ output_name=model_output_name,
35699
+ after=after
35700
+ )
35701
 
 
35702
 
35703
+ my_pipeline(step_count=4)
35704
+ ```
35705
 
35706
+ {% hint style="warning" %}
35707
+ The main challenge of this implementation is that it is currently not
35708
+ possible to pass a variable number of artifacts into a step programmatically,
35709
+ so the `selection_step` needs to query all artifacts produced by the previous
35710
+ steps via the ZenML Client instead.
35711
+ {% endhint %}
35712
+
35713
+ {% hint style="info" %}
35714
+ You can also see this in action with the [E2E example](https://github.com/zenml-io/zenml/tree/main/examples/e2e).
35715
+
35716
+ In the `steps/hp_tuning` folder, you will find two step files, that can be
35717
+ used as a starting point for building your own hyperparameter search tailored
35718
+ specifically to your use case:
35719
+
35720
+ * [`hp_tuning_single_search(...)`](https://github.com/zenml-io/zenml/blob/main/examples/e2e/steps/hp_tuning/hp_tuning_single_search.py) performs a randomized search for the best model hyperparameters in a configured space.
35721
+ * [`hp_tuning_select_best_model(...)`](https://github.com/zenml-io/zenml/blob/main/examples/e2e/steps/hp_tuning/hp_tuning_select_best_model.py) searches for the best hyperparameters by looping over the results of previous random searches to find the best model according to a defined metric.
35722
+ {% endhint %}
35723
 
35724
  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
35725
 
 
35819
 
35820
  Check below for more advanced ways to build and interact with your pipeline.
35821
 
35822
  <table data-view="cards"><thead><tr><th></th><th></th><th></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td>Configure pipeline/step parameters</td><td></td><td></td><td><a href="use-pipeline-step-parameters.md">use-pipeline-step-parameters.md</a></td></tr><tr><td>Name and annotate step outputs</td><td></td><td></td><td><a href="step-output-typing-and-annotation.md">step-output-typing-and-annotation.md</a></td></tr><tr><td>Control caching behavior</td><td></td><td></td><td><a href="control-caching-behavior.md">control-caching-behavior.md</a></td></tr><tr><td>Customize the step invocation ids</td><td></td><td></td><td><a href="using-a-custom-step-invocation-id.md">using-a-custom-step-invocation-id.md</a></td></tr><tr><td>Name your pipeline runs</td><td></td><td></td><td><a href="name-your-pipeline-runs.md">name-your-pipeline-runs.md</a></td></tr><tr><td>Use failure/success hooks</td><td></td><td></td><td><a href="use-failure-success-hooks.md">use-failure-success-hooks.md</a></td></tr><tr><td>Hyperparameter tuning</td><td></td><td></td><td><a href="hyper-parameter-tuning.md">hyper-parameter-tuning.md</a></td></tr><tr><td>Attach metadata to a step</td><td></td><td></td><td><a href="../../model-management-metrics/track-metrics-metadata/attach-metadata-to-a-step.md">attach-metadata-to-a-step.md</a></td></tr><tr><td>Fetch metadata within steps</td><td></td><td></td><td><a href="../../model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md">fetch-metadata-within-steps.md</a></td></tr><tr><td>Fetch metadata during pipeline composition</td><td></td><td></td><td><a href="../../model-management-metrics/track-metrics-metadata/fetch-metadata-within-pipeline.md">fetch-metadata-within-pipeline.md</a></td></tr><tr><td>Enable or disable logs storing</td><td></td><td></td><td><a href="../../control-logging/enable-or-disable-logs-storing.md">enable-or-disable-logs-storing.md</a></td></tr><tr><td>Special Metadata Types</td><td></td><td></td><td><a href="../../model-management-metrics/track-metrics-metadata/logging-metadata.md">logging-metadata.md</a></td></tr><tr><td>Access secrets in a step</td><td></td><td></td><td><a href="access-secrets-in-a-step.md">access-secrets-in-a-step.md</a></td></tr></tbody></table>
35823
 
35824
  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
35825
 
 
37538
 
37539
  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
37540
 
37541
  ================
37542
  File: docs/book/how-to/pipeline-development/use-configuration-files/autogenerate-a-template-yaml-file.md
37543
  ================
 
40528
 
40529
  This section covers all aspects of setting up and managing ZenML projects.
40530
 
40531
+ ================
40532
+ File: docs/book/how-to/trigger-pipelines/README.md
40533
+ ================
40534
+ ---
40535
+ icon: bell-concierge
40536
+ description: There are numerous ways to trigger a pipeline
40537
+ ---
40538
+
40539
+ # Trigger a pipeline (Run Templates)
40540
+
40541
+ In ZenML, the simplest way to execute a run is to use your pipeline function:
40542
+
40543
+ ```python
40544
+ from zenml import step, pipeline
40545
+
40546
+
40547
+ @step # Just add this decorator
40548
+ def load_data() -> dict:
40549
+ training_data = [[1, 2], [3, 4], [5, 6]]
40550
+ labels = [0, 1, 0]
40551
+ return {'features': training_data, 'labels': labels}
40552
+
40553
+
40554
+ @step
40555
+ def train_model(data: dict) -> None:
40556
+ total_features = sum(map(sum, data['features']))
40557
+ total_labels = sum(data['labels'])
40558
+
40559
+ # Train some model here...
40560
+
40561
+ print(
40562
+ f"Trained model using {len(data['features'])} data points. "
40563
+ f"Feature sum is {total_features}, label sum is {total_labels}."
40564
+ )
40565
+
40566
+
40567
+ @pipeline # This function combines steps together
40568
+ def simple_ml_pipeline():
40569
+ dataset = load_data()
40570
+ train_model(dataset)
40571
+
40572
+
40573
+ if __name__ == "__main__":
40574
+ simple_ml_pipeline()
40575
+ ```
40576
+
40577
+ However, there are other ways to trigger a pipeline, specifically a pipeline
40578
+ with a remote stack (remote orchestrator, artifact store, and container
40579
+ registry).
40580
+
40581
+ ## Run Templates
40582
+
40583
+ **Run Templates** are pre-defined, parameterized configurations for your ZenML
40584
+ pipelines that can be easily executed from the ZenML dashboard or via our
40585
+ Client/REST API. Think of them as blueprints for your pipeline runs, ready
40586
+ to be customized on the fly.
40587
+
40588
+ {% hint style="success" %}
40589
+ This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please
40590
+ [sign up here](https://cloud.zenml.io) to get access.
40591
+ {% endhint %}
40592
+
40593
+ ![Working with Templates](../../../.gitbook/assets/run-templates.gif)
40594
+
40595
+ <table data-view="cards"><thead><tr><th></th><th></th><th></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td>Use templates: Python SDK</td><td></td><td></td><td><a href="use-templates-python.md">use-templates-python.md</a></td></tr><tr><td>Use templates: CLI</td><td></td><td></td><td><a href="use-templates-cli.md">use-templates-cli.md</a></td></tr><tr><td>Use templates: Dashboard</td><td></td><td></td><td><a href="use-templates-dashboard.md">use-templates-dashboard.md</a></td></tr><tr><td>Use templates: Rest API</td><td></td><td></td><td><a href="use-templates-rest-api.md">use-templates-rest-api.md</a></td></tr></tbody></table>
40596
+ <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
40597
+
40598
+ ================
40599
+ File: docs/book/how-to/trigger-pipelines/use-templates-cli.md
40600
+ ================
40601
+ ---
40602
+ description: Create a template using the ZenML CLI
40603
+ ---
40604
+
40605
+ {% hint style="success" %}
40606
+ This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please
40607
+ [sign up here](https://cloud.zenml.io) to get access.
40608
+ {% endhint %}
40609
+
40610
+ ## Create a template
40611
+
40612
+ You can use the ZenML CLI to create a run template:
40613
+
40614
+ ```bash
40615
+ # The <PIPELINE_SOURCE_PATH> will be `run.my_pipeline` if you defined a
40616
+ # pipeline with name `my_pipeline` in a file called `run.py`
40617
+ zenml pipeline create-run-template <PIPELINE_SOURCE_PATH> --name=<TEMPLATE_NAME>
40618
+ ```
40619
+
40620
+ {% hint style="warning" %}
40621
+ You need to have an active **remote stack** while running this command or you can specify
40622
+ one with the `--stack` option.
40623
+ {% endhint %}
40624
+
40625
+
40626
+ <!-- For scarf -->
40627
+ <figure><img alt="ZenML Scarf" referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" /></figure>
40628
+
40629
+ ================
40630
+ File: docs/book/how-to/trigger-pipelines/use-templates-dashboard.md
40631
+ ================
40632
+ ---
40633
+ description: Create and run a template over the ZenML Dashboard
40634
+ ---
40635
+
40636
+ {% hint style="success" %}
40637
+ This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please
40638
+ [sign up here](https://cloud.zenml.io) to get access.
40639
+ {% endhint %}
40640
+
40641
+ ## Create a template
40642
+
40643
+ In order to create a template on the dashboard, go to a pipeline run that you
40644
+ executed on a remote stack (i.e. at least a remote orchestrator, artifact
40645
+ store, and container registry):
40646
+
40647
+ ![Create Templates on the dashboard](../../../.gitbook/assets/run-templates-create-1.png)
40648
+
40649
+ Click on `+ New Template`, give it a name and click `Create`:
40650
+
40651
+ ![Template Details](../../../.gitbook/assets/run-templates-create-2.png)
40652
+
40653
+ ## Run a template using the dashboard
40654
+
40655
+ In order to run a template from the dashboard:
40656
+
40657
+ - You can either click `Run a Pipeline` on the main `Pipelines` page, or
40658
+ - You can go to a specific template page and click on `Run Template`.
40659
+
40660
+ Either way, you will be forwarded to a page where you will see the
40661
+ `Run Details`. Here, you have the option to upload a `.yaml` [configuration file](../pipeline-development/use-configuration-files/README.md)
40662
+ or change the configuration on the go by using our editor.
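Such a file uses the standard run-configuration format. A minimal sketch (the `model_trainer` step name and `model_type` parameter are illustrative and only apply if your pipeline defines them):

```yaml
# Override the parameters of a single step for this run
steps:
  model_trainer:
    parameters:
      model_type: rf
```

The same structure works whether you upload it as a file or paste it into the editor.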
40663
+
40664
+ ![Run Details](../../../.gitbook/assets/run-templates-run-1.png)
40665
+
40666
+ Once you run the template, a new run will be executed on the same stack as
40667
+ the original run.
40668
+
40669
+ <!-- For scarf -->
40670
+ <figure><img alt="ZenML Scarf" referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" /></figure>
40671
+
40672
+ ================
40673
+ File: docs/book/how-to/trigger-pipelines/use-templates-python.md
40674
+ ================
40675
+ ---
40676
+ description: Create and run a template using the ZenML Python SDK
40677
+ ---
40678
+
40679
+ {% hint style="success" %}
40680
+ This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please
40681
+ [sign up here](https://cloud.zenml.io) to get access.
40682
+ {% endhint %}
40683
+
40684
+ ## Create a template
40685
+
40686
+ You can use the ZenML client to create a run template:
40687
+
40688
+ ```python
40689
+ from zenml.client import Client
40690
+
40691
+ run = Client().get_pipeline_run(<RUN_NAME_OR_ID>)
40692
+
40693
+ Client().create_run_template(
40694
+ name=<TEMPLATE_NAME>,
40695
+ deployment_id=run.deployment_id
40696
+ )
40697
+ ```
40698
+
40699
+ {% hint style="warning" %}
40700
+ You need to select **a pipeline run that was executed on a remote stack**
40701
+ (i.e. at least a remote orchestrator, artifact store, and container registry)
40702
+ {% endhint %}
40703
+
40704
+
40705
+ You can also create a template directly from your pipeline definition by running the
40706
+ following code while having a **remote stack** active:
40707
+ ```python
40708
+ from zenml import pipeline
40709
+
40710
+ @pipeline
40711
+ def my_pipeline():
40712
+ ...
40713
+
40714
+ template = my_pipeline.create_run_template(name=<TEMPLATE_NAME>)
40715
+ ```
40716
+
40717
+ ## Run a template
40718
+
40719
+ You can use the ZenML client to run a template:
40720
+
40721
+ ```python
40722
+ from zenml.client import Client
40723
+
40724
+ template = Client().get_run_template(<TEMPLATE_NAME>)
40725
+
40726
+ config = template.config_template
40727
+
40728
+ # [OPTIONAL] ---- modify the config here ----
40729
+
40730
+ Client().trigger_pipeline(
40731
+ template_id=template.id,
40732
+ run_configuration=config,
40733
+ )
40734
+ ```
40735
+
40736
+ Once you trigger the template, a new run will be executed on the same stack as
40737
+ the original run.
40738
+
40739
+ ## Advanced Usage: Run a template from another pipeline
40740
+
40741
+ It is also possible to use the same logic to run a pipeline within another
40742
+ pipeline:
40743
+
40744
+ ```python
40745
+ import pandas as pd
40746
+
40747
+ from zenml import pipeline, step
40748
+ from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact
40749
+ from zenml.artifacts.utils import load_artifact
40750
+ from zenml.client import Client
40751
+ from zenml.config.pipeline_run_configuration import PipelineRunConfiguration
40752
+
40753
+
40754
+ @step
40755
+ def trainer(data_artifact_id: str):
40756
+ df = load_artifact(data_artifact_id)
40757
+
40758
+
40759
+ @pipeline
40760
+ def training_pipeline():
40761
+ trainer()
40762
+
40763
+
40764
+ @step
40765
+ def load_data() -> pd.DataFrame:
40766
+ ...
40767
+
40768
+
40769
+ @step
40770
+ def trigger_pipeline(df: UnmaterializedArtifact):
40771
+ # By using UnmaterializedArtifact we can get the ID of the artifact
40772
+ run_config = PipelineRunConfiguration(
40773
+ steps={"trainer": {"parameters": {"data_artifact_id": df.id}}}
40774
+ )
40775
+
40776
+ Client().trigger_pipeline("training_pipeline", run_configuration=run_config)
40777
+
40778
+
40779
+ @pipeline
40780
+ def loads_data_and_triggers_training():
40781
+ df = load_data()
40782
+ trigger_pipeline(df) # Will trigger the other pipeline
40783
+ ```
40784
+
40785
+ Read more about the [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) and the [`trigger_pipeline`](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client) function in the [SDK Docs](https://sdkdocs.zenml.io/).
40786
+
40787
+ Read more about Unmaterialized Artifacts [here](../data-artifact-management/complex-usecases/unmaterialized-artifacts.md).
40788
+
40789
+ <!-- For scarf -->
40790
+ <figure><img alt="ZenML Scarf" referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" /></figure>
40791
+
40792
+ ================
40793
+ File: docs/book/how-to/trigger-pipelines/use-templates-rest-api.md
40794
+ ================
40795
+ ---
40796
+ description: Create and run a template over the ZenML Rest API
40797
+ ---
40798
+
40799
+ {% hint style="success" %}
40800
+ This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please
40801
+ [sign up here](https://cloud.zenml.io) to get access.
40802
+ {% endhint %}
40803
+
40804
+ ## Run a template
40805
+
40806
+ Triggering a pipeline from the REST API **only** works if you've created at
40807
+ least one run template for that pipeline.
40808
+
40809
+ As a prerequisite, you need the pipeline name. After you have it, there are
40810
+ three calls that need to be made in order to trigger a pipeline from the
40811
+ REST API:
40812
+
40813
+ 1. `GET /pipelines?name=<PIPELINE_NAME>` -> This returns a response, where a <PIPELINE_ID> can be copied
40814
+ 2. `GET /run_templates?pipeline_id=<PIPELINE_ID>` -> This returns a list of responses where a <TEMPLATE_ID> can be chosen
40815
+ 3. `POST /run_templates/<TEMPLATE_ID>/runs` -> This runs the pipeline. You can pass the [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) in the body
40816
+
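The same three calls can be sketched in plain Python with the standard library. Everything below is illustrative: the server URL and token are placeholders, picking the first item is a simplification, and the paginated `items` response shape is an assumption to verify against your server's API docs:

```python
import json
import urllib.parse
import urllib.request

ZENML_SERVER = "https://your-zenml-server.example.com"  # placeholder
TOKEN = "<YOUR_TOKEN>"  # placeholder bearer token


def _call(method, path, params=None, body=None):
    # Small helper for authenticated JSON calls against the ZenML REST API
    url = f"{ZENML_SERVER}/api/v1{path}"
    if params:
        url += "?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode() if body is not None else None,
        method=method,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "accept": "application/json",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def trigger_latest_template(pipeline_name, run_config):
    # 1. Look up the pipeline ID by name
    pipelines = _call("GET", "/pipelines", params={"name": pipeline_name})
    pipeline_id = pipelines["items"][0]["id"]

    # 2. List run templates for that pipeline and pick one
    templates = _call("GET", "/run_templates", params={"pipeline_id": pipeline_id})
    template_id = templates["items"][0]["id"]

    # 3. Trigger a run from the template, passing the run configuration
    return _call("POST", f"/run_templates/{template_id}/runs", body=run_config)
```

For example, `trigger_latest_template("training", {"steps": {"model_trainer": {"parameters": {"model_type": "rf"}}}})` mirrors the curl example below.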
40817
+ ## A working example
40818
+
40819
+ {% hint style="info" %}
40820
+ Learn how to get a bearer token for the curl commands
40821
+ [here](../../../reference/api-reference.md#using-a-bearer-token-to-access-the-api-programmatically).
40822
+ {% endhint %}
40823
+
40824
+ Here is an example. Let's say we would like to re-run a pipeline called
40825
+ `training`. We first query the `/pipelines` endpoint:
40826
+
40827
+ ```shell
40828
+ curl -X 'GET' \
40829
+ '<YOUR_ZENML_SERVER_URL>/api/v1/pipelines?hydrate=false&name=training' \
40830
+ -H 'accept: application/json' \
40831
+ -H 'Authorization: Bearer <YOUR_TOKEN>'
40832
+ ```
40833
+
40834
+ <figure><img src="../../../.gitbook/assets/rest_api_step_1.png" alt=""><figcaption><p>Identifying the pipeline ID</p></figcaption></figure>
40835
+
40836
+ We can take the ID from any object in the list of responses. In this case,
40837
+ the <PIPELINE_ID> is `c953985e-650a-4cbf-a03a-e49463f58473`.
40838
+
40839
+ After this, we take the pipeline ID and call the `/run_templates?pipeline_id=<PIPELINE_ID>` endpoint:
40840
+
40841
+ ```shell
40842
+ curl -X 'GET' \
40843
+ '<YOUR_ZENML_SERVER_URL>/api/v1/run_templates?hydrate=false&logical_operator=and&page=1&size=20&pipeline_id=c953985e-650a-4cbf-a03a-e49463f58473' \
40844
+ -H 'accept: application/json' \
40845
+ -H 'Authorization: Bearer <YOUR_TOKEN>'
40846
+ ```
40847
+
40848
+ We can now take the <TEMPLATE_ID> from this response. Here it is `b826b714-a9b3-461c-9a6e-1bde3df3241d`.
40849
+
40850
+ <figure><img src="../../../.gitbook/assets/rest_api_step_2.png" alt=""><figcaption><p>Identifying the template ID</p></figcaption></figure>
40851
+
Finally, we can use the template ID to trigger the pipeline with a different
configuration:

```shell
curl -X 'POST' \
  '<YOUR_ZENML_SERVER_URL>/api/v1/run_templates/b826b714-a9b3-461c-9a6e-1bde3df3241d/runs' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer <YOUR_TOKEN>' \
  -d '{
    "steps": {"model_trainer": {"parameters": {"model_type": "rf"}}}
  }'
```

A positive response means your pipeline has been re-triggered with a
different config!

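The same trigger call can also be made from Python. Below is a minimal, stdlib-only sketch that builds the POST request from the curl command above; the server URL `https://zenml.example.com` and the `build_trigger_request` helper are placeholders for illustration, not part of ZenML:

```python
import json
import urllib.request

# Placeholder values -- substitute your own server URL and API token.
SERVER_URL = "https://zenml.example.com"
TOKEN = "<YOUR_TOKEN>"


def build_trigger_request(template_id: str, steps_config: dict) -> urllib.request.Request:
    """Build the POST request that triggers a run from a run template."""
    return urllib.request.Request(
        url=f"{SERVER_URL}/api/v1/run_templates/{template_id}/runs",
        data=json.dumps({"steps": steps_config}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "accept": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )


req = build_trigger_request(
    "b826b714-a9b3-461c-9a6e-1bde3df3241d",
    {"model_trainer": {"parameters": {"model_type": "rf"}}},
)
print(req.get_full_url())
# Sending the request is left out here; against a real server you would run:
#     with urllib.request.urlopen(req) as resp:
#         print(resp.status)
```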
<!-- For scarf -->
<figure><img alt="ZenML Scarf" referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" /></figure>

================
File: docs/book/how-to/debug-and-solve-issues.md
================
 
* [Name your pipeline runs](how-to/pipeline-development/build-pipelines/name-your-pipeline-runs.md)
* [Tag your pipeline runs](how-to/pipeline-development/build-pipelines/tag-your-pipeline-runs.md)
* [Use failure/success hooks](how-to/pipeline-development/build-pipelines/use-failure-success-hooks.md)
* [Fan in, fan out](how-to/pipeline-development/build-pipelines/fan-in-fan-out.md)
* [Hyperparameter tuning](how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md)
* [Access secrets in a step](how-to/pipeline-development/build-pipelines/access-secrets-in-a-step.md)
* [Run an individual step](how-to/pipeline-development/build-pipelines/run-an-individual-step.md)
 
* [Develop locally](how-to/pipeline-development/develop-locally/README.md)
  * [Use config files to develop locally](how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md)
  * [Keep your pipelines and dashboard clean](how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md)

* [Use configuration files](how-to/pipeline-development/use-configuration-files/README.md)
  * [How to configure a pipeline with a YAML](how-to/pipeline-development/use-configuration-files/how-to-use-config.md)
  * [What can be configured](how-to/pipeline-development/use-configuration-files/what-can-be-configured.md)

* [Configure Python environments](how-to/pipeline-development/configure-python-environments/README.md)
  * [Handling dependencies](how-to/pipeline-development/configure-python-environments/handling-dependencies.md)
  * [Configure the server environment](how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md)
* [Trigger a pipeline](how-to/trigger-pipelines/README.md)
  * [Use templates: Python SDK](how-to/trigger-pipelines/use-templates-python.md)
  * [Use templates: CLI](how-to/trigger-pipelines/use-templates-cli.md)
  * [Use templates: Dashboard](how-to/trigger-pipelines/use-templates-dashboard.md)
  * [Use templates: Rest API](how-to/trigger-pipelines/use-templates-rest-api.md)
* [Customize Docker builds](how-to/customize-docker-builds/README.md)
  * [Docker settings on a pipeline](how-to/customize-docker-builds/docker-settings-on-a-pipeline.md)
  * [Docker settings on a step](how-to/customize-docker-builds/docker-settings-on-a-step.md)