diff --git "a/how-to-guides.txt" "b/how-to-guides.txt" --- "a/how-to-guides.txt" +++ "b/how-to-guides.txt" @@ -1,37 +1,39 @@ === File: docs/book/introduction.md === -# ZenML Overview +# ZenML Documentation Summary -**ZenML** is an open-source MLOps framework designed for creating portable, production-ready machine learning pipelines by decoupling infrastructure from code, facilitating collaboration among developers. +**ZenML** is an open-source MLOps framework designed for creating portable, production-ready machine learning pipelines. It separates infrastructure from code, enhancing collaboration among developers. -## For MLOps Platform Engineers +## Key Features + +### For MLOps Platform Engineers - **ZenML Pro**: Offers a managed instance with features like CI/CD, Model Control Plane, and RBAC. -- **Self-hosted Deployment**: Deploy on any cloud provider using Terraform. +- **Self-hosted Deployment**: Deploy ZenML on any cloud provider using Terraform. ```bash zenml stack register --provider aws zenml stack deploy --provider gcp ``` -- **Standardization**: Register environments as ZenML stacks for consistent ML workflows. +- **Standardization**: Register staging and production environments as ZenML stacks for consistent ML workflows. ```bash zenml orchestrator register kfp_orchestrator -f kubeflow zenml stack register production --orchestrator kubeflow ... ``` -- **No Vendor Lock-In**: Switch between cloud providers easily. +- **No Vendor Lock-In**: Easily switch between cloud providers. ```bash zenml stack set gcp python run.py # Run in GCP zenml stack set aws - python run.py # Now in AWS + python run.py # Run in AWS ``` -## For Data Scientists -- **Local Development**: Start developing locally and switch to production without code changes. +### For Data Scientists +- **Local Development**: Develop ML models locally and switch to production seamlessly. ```bash - python run.py # Develop locally + python run.py # Local development zenml stack set production - python run.py # Run in production + python run.py # Production run ``` -- **Pythonic SDK**: Use decorators to create ZenML pipelines. +- **Pythonic SDK**: Use decorators to convert Python functions into ZenML pipelines. ```python from zenml import pipeline, step @@ -41,125 +43,133 @@ @step def step_2(input_one: str, input_two: str) -> None: - print(f"{input_one} {input_two}") + print(input_one + ' ' + input_two) @pipeline def my_pipeline(): - output_step_one = step_1() - step_2(input_one="hello", input_two=output_step_one) - + step_2(input_one="hello", input_two=step_1()) + my_pipeline() ``` -- **Automatic Metadata Tracking**: Tracks metadata and versions datasets/models automatically. +- **Automatic Metadata Tracking**: ZenML tracks metadata and versions datasets and models. -## For ML Engineers -- **ML Lifecycle Management**: Manage ML workflows and environments efficiently. +### For ML Engineers +- **ML Lifecycle Management**: Manage ML workflows and environments easily. ```bash zenml stack set staging python run.py # Test on staging zenml stack set production python run.py # Run in production ``` -- **Reproducibility**: Automatically tracks and versions all components. -- **Automated Deployments**: Define workflows as ZenML pipelines for easy deployment. +- **Reproducibility**: Automatically track and version all components for easy result reproduction. +- **Automated Deployments**: Define workflows as ZenML pipelines for automatic deployment to services like Seldon. 
```python from zenml.integrations.seldon.steps import seldon_model_deployer_step @pipeline def my_pipeline(): - data = data_loader_step() - model = model_trainer_step(data) + model = model_trainer_step(data_loader_step()) seldon_model_deployer_step(model) ``` ## Additional Resources -- **For MLOps Engineers**: [Switch to production](user-guide/production-guide/cloud-orchestration.md), [Component guide](./component-guide/README.md), [FAQ](reference/faq.md). -- **For Data Scientists**: [Core Concepts](getting-started/core-concepts.md), [Starter Guide](user-guide/starter-guide/), [Quickstart in Colab](https://colab.research.google.com/github/zenml-io/zenml/blob/main/examples/quickstart/notebooks/quickstart.ipynb). -- **For ML Engineers**: [Starter Guide](user-guide/starter-guide/), [How To](./how-to/pipeline-development/build-pipelines/README.md), [Examples](https://github.com/zenml-io/zenml-projects). +- **For MLOps Engineers**: [ZenML Pro](getting-started/zenml-pro/README.md), [Cloud Orchestration Guide](user-guide/production-guide/cloud-orchestration.md) +- **For Data Scientists**: [Core Concepts](getting-started/core-concepts.md), [Starter Guide](user-guide/starter-guide/) +- **For ML Engineers**: [How To](./how-to/pipeline-development/build-pipelines/README.md), [Examples](https://github.com/zenml-io/zenml-projects) + +Explore more at [ZenML Live Demo](https://www.zenml.io/live-demo). ================================================== === File: docs/book/user-guide/starter-guide/track-ml-models.md === -## Summary of ZenML Model Control Plane Documentation +### Summary of ZenML Model Control Plane Documentation -### Overview -ZenML's Model Control Plane (MCP) allows users to manage machine learning models, which consist of pipelines, artifacts, and metadata. A ZenML Model encapsulates the business logic of an ML product and is treated as a first-class entity within the ZenML ecosystem. +#### Overview of ZenML Model +- A **ZenML Model** is an entity that groups pipelines, artifacts, metadata, and business data, representing the business logic of an ML product. +- Models are central to ZenML and can be managed via the ZenML API, client, or ZenML Pro dashboard. -### Key Concepts -- **Model**: Represents a unified entity grouping pipelines, artifacts, and metadata. It includes technical models (files with weights and parameters), training data, and predictions. -- **Model Versions**: Each model can have multiple versions, representing iterations of the model. +#### Key Features +- **Model Versions**: Each model can have multiple versions, allowing for tracking of iterations. +- **Artifacts**: Associated artifacts include technical models, training data, and predictions. -### Viewing Models +#### Viewing Models - **CLI**: Use `zenml model list` to list all models. -- **ZenML Pro Dashboard**: Offers visualization capabilities for models. - -### Configuring Models in Pipelines -Models can be linked to pipelines either at the pipeline or step level. This ensures all artifacts generated during the pipeline run are associated with the specified model. +- **Dashboard**: The ZenML Pro dashboard provides visualization capabilities for models. -#### Example Code +#### Configuring Models in Pipelines +- Models can be linked to pipelines, ensuring all generated artifacts are associated with the specified model. 
+
+**Example Code:**
```python
from zenml import pipeline, Model

-model = Model(name="iris_classifier", version=None, license="Apache 2.0", description="A classification model for the iris dataset.")
+model = Model(name="iris_classifier", version=None)

@pipeline(model=model)
def training_pipeline(gamma: float = 0.002):
    X_train, X_test, y_train, y_test = training_data_loader()
    svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)

if __name__ == "__main__":
    training_pipeline()
```

-### Fetching Models in Pipelines
-Models can be accessed within steps using `get_step_context()` or `get_pipeline_context()`.
+#### Fetching Models in Pipelines
+- Models can be accessed via `get_step_context()` or `get_pipeline_context()`.

-#### Example Code
+**Example Code:**
```python
from zenml import get_step_context, step, pipeline

@step
-def svc_trainer(X_train, y_train, gamma=0.001):
+def svc_trainer(X_train, y_train):
    model = get_step_context().model
+    ...

@pipeline(model=Model(name="iris_classifier", version="production"))
-def training_pipeline(gamma: float = 0.002):
+def training_pipeline():
    model = get_pipeline_context().model
```

-### Logging Metadata
-Models can log metadata using the `log_model_metadata` method, which allows capturing key-value pairs for model performance metrics.
+#### Logging Metadata
+- Metadata can be logged to models using `log_model_metadata`.

-#### Example Code
+**Example Code:**
```python
from zenml import get_step_context, step, log_model_metadata

@step
-def svc_trainer(X_train, y_train, gamma=0.001):
+def svc_trainer(X_train, y_train):
    model = get_step_context().model
+    accuracy = ...  # computed on an evaluation set
    log_model_metadata(model_name="iris_classifier", metadata={"accuracy": float(accuracy)})
```

-### Model Stages
-Models can exist in different stages:
-- **staging**: Ready for production.
-- **production**: Actively used in production.
-- **latest**: Most recent version.
-- **archived**: No longer relevant.
+#### Retrieving Metadata
+- Metadata can be retrieved using the ZenML client.

-#### Example Code
+**Example Code:**
+```python
+from zenml.client import Client
+
+model_version = Client().get_model_version("iris_classifier")
+print(model_version.run_metadata["accuracy"].value)
+```
+
+#### Model Stages
+- Models can exist in stages: `staging`, `production`, `latest`, and `archived`.
+
+**Example Code for Stage Management:**
```python
model = Model(name="iris_classifier", version="latest")
model.set_stage(stage="production", force=True)
```

-### CLI Commands for Model Stages
+#### CLI Commands for Model Stages
- List staging models: `zenml model version list --stage staging`
- Update to production: `zenml model version update -s production`

-### Conclusion
-ZenML's Model Control Plane provides robust features for managing ML models and their lifecycle, enhancing traceability and reproducibility in ML workflows. For more details, refer to the [Model Management guide](../../how-to/model-management-metrics/model-control-plane/README.md).
+#### Conclusion
+ZenML's Model Control Plane allows for effective management of ML models, their versions, and associated metadata, enhancing traceability and reproducibility in ML workflows. For more details, refer to the [Model Management guide](../../how-to/model-management-metrics/model-control-plane/README.md).
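+
+As a final, self-contained sketch, a promoted version can later be consumed by stage rather than by version number (assuming, as in the pipeline example above, that a stage name such as `"production"` is accepted as the `version` argument):
+
+```python
+from zenml import Model
+
+# Load whichever model version currently holds the "production" stage,
+# instead of pinning a specific version number.
+production_model = Model(name="iris_classifier", version="production")
+print(production_model.version)
+```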
================================================== @@ -167,51 +177,56 @@ ZenML's Model Control Plane provides robust features for managing ML models and ### ZenML Artifact Management Overview -ZenML automates the versioning and management of artifacts—data, models, and evaluations—essential for reproducibility in machine learning workflows. This guide covers how to efficiently name, organize, and utilize data within the ZenML framework. +ZenML automates the versioning and management of artifacts—data, models, and evaluations—within machine learning workflows, ensuring reproducibility and traceability. #### Managing Artifacts -1. **Artifact Naming**: Use the `Annotated` object to assign human-readable names to outputs for better discoverability. - ```python - from typing_extensions import Annotated - import pandas as pd - from sklearn.datasets import load_iris - from zenml import pipeline, step +- **Artifact Naming**: Use the `Annotated` object to assign human-readable names to outputs for better discoverability. + + ```python + from typing_extensions import Annotated + import pandas as pd + from sklearn.datasets import load_iris + from zenml import pipeline, step - @step - def training_data_loader() -> Annotated[pd.DataFrame, "iris_dataset"]: - return load_iris(as_frame=True).get("frame") + @step + def training_data_loader() -> Annotated[pd.DataFrame, "iris_dataset"]: + iris = load_iris(as_frame=True) + return iris.get("frame") - @pipeline - def feature_engineering_pipeline(): - training_data_loader() - ``` + @pipeline + def feature_engineering_pipeline(): + training_data_loader() + ``` -2. **Default Naming**: Unspecified outputs default to `{pipeline_name}::{step_name}::output`. Custom names enhance visual exploration in the ZenML dashboard. +- **Default Naming**: Unnamed outputs default to `{pipeline_name}::{step_name}::output`. -3. **Manual Versioning**: ZenML auto-versions artifacts, but you can specify custom versions using `ArtifactConfig`. - ```python - from zenml import step, ArtifactConfig +- **Versioning**: ZenML auto-increments artifact versions. Custom versions can be specified using `ArtifactConfig`. - @step - def training_data_loader() -> Annotated[pd.DataFrame, ArtifactConfig(name="iris_dataset", version="raw_2023")]: - ... - ``` + ```python + from zenml import step, ArtifactConfig -4. **Metadata and Tags**: Extend artifacts with metadata or tags using either the `ArtifactConfig` or `get_step_context()` methods. - ```python - @step - def annotation_approach() -> Annotated[str, ArtifactConfig(name="artifact_name", run_metadata={"key": "value"}, tags=["tag_name"])]: - ... - ``` + @step + def training_data_loader() -> Annotated[pd.DataFrame, ArtifactConfig(name="iris_dataset", version="raw_2023")]: + ... + ``` + +- **Metadata and Tags**: Extend artifacts with metadata and tags using `ArtifactConfig` or `get_step_context()`. + + ```python + @step + def annotation_approach() -> Annotated[str, ArtifactConfig(name="artifact_name", run_metadata={"key": "value"}, tags=["tag"])]: + return "string" + ``` #### Comparing Metadata (Pro Feature) -The ZenML Pro dashboard provides an Experiment Comparison tool to visualize and analyze metadata across runs, offering both Table and Parallel Coordinates views for insights into pipeline behavior. +ZenML Pro offers an Experiment Comparison tool to visualize metadata across runs in two views: **Table View** (structured comparison) and **Parallel Coordinates View** (relationship identification). 
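+
+The `get_step_context()` route for the **Metadata and Tags** point above can look as follows (a minimal sketch; it assumes the step context exposes the `add_output_metadata` and `add_output_tags` helpers, as in recent ZenML releases):
+
+```python
+from zenml import get_step_context, step
+
+@step
+def annotation_approach() -> str:
+    # Attach metadata and tags to this step's output at runtime,
+    # as an alternative to declaring them up front in ArtifactConfig.
+    step_context = get_step_context()
+    step_context.add_output_metadata(output_name="output", metadata={"key": "value"})
+    step_context.add_output_tags(output_name="output", tags=["tag"])
+    return "string"
+```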
#### Artifact Types
-Assign types to artifacts to enhance filtering and visualization in the dashboard.
+Specify artifact types to enhance dashboard visibility and filtering:
+
```python
from zenml import ArtifactConfig, step
from zenml.enums import ArtifactType

@@ -221,9 +236,10 @@ def trainer() -> Annotated[MyCustomModel, ArtifactConfig(artifact_type=ArtifactT
     return MyCustomModel(...)
```

-#### External Artifacts
+#### Consuming External Artifacts
+
+Use `ExternalArtifact` to integrate data not produced by ZenML, such as from external sources:
-Use `ExternalArtifact` to consume data not produced by ZenML, such as data from external sources.
```python
from zenml import ExternalArtifact, pipeline, step

@@ -237,41 +253,72 @@ def printing_pipeline():
    print_data(data=data)
```

-#### Consuming Artifacts from Other Pipelines
+#### Fetching Artifacts from Other Pipelines
+
+Utilize the `Client` to fetch artifacts by ID, name, or version within a pipeline:
-Utilize the `Client` to fetch artifacts from previous runs.
```python
from zenml.client import Client

-client = Client()
-dataset_artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")
+@step
+def trainer(dataset: pd.DataFrame):
+    ...
+
+@pipeline
+def training_pipeline():
+    client = Client()
+    dataset_artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")
+    trainer(dataset=dataset_artifact)
```

#### Managing External Artifacts
-You can save predictions or other artifacts generated outside ZenML.
+You can save predictions or other artifacts created outside ZenML:
+
```python
+from zenml import save_artifact
+
+model = ...
prediction = model.predict([[1, 1, 1, 1]])
save_artifact(prediction, name="iris_predictions")
```

#### Linking Existing Data
-Link existing data as ZenML artifacts to avoid redundancy.
+Link external data as ZenML artifacts:
+
```python
+import os
+from uuid import uuid4
+
+from pytorch_lightning import Trainer
+
+from zenml import register_artifact
+from zenml.client import Client
+
+model = ...  # a pytorch_lightning.LightningModule
+prefix = Client().active_stack.artifact_store.path
+default_root_dir = os.path.join(prefix, uuid4().hex)
+
+trainer = Trainer(default_root_dir=default_root_dir)
+trainer.fit(model)
+
register_artifact(default_root_dir, name="all_my_model_checkpoints")
```

#### Logging Metadata
-Associate metadata with artifacts for better tracking.
+Log metadata for artifacts using `log_artifact_metadata`: + ```python -log_artifact_metadata(artifact_name="my_model", metadata={"accuracy": float(accuracy)}) +from zenml import step, log_artifact_metadata + +@step +def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model")]: + model.fit(dataset[0], dataset[1]) + accuracy = model.score(dataset[0], dataset[1]) + log_artifact_metadata(metadata={"accuracy": float(accuracy)}) + return model ``` ### Code Example -Here’s a concise example demonstrating the key functionalities: +A complete example demonstrating the above concepts: + ```python from typing import Optional, Tuple from typing_extensions import Annotated @@ -289,7 +336,8 @@ def versioned_data_loader_step() -> Annotated[Tuple[np.ndarray, np.ndarray], Art @step def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model")]: model.fit(dataset[0], dataset[1]) - log_artifact_metadata(metadata={"accuracy": float(model.score(dataset[0], dataset[1]))}) + accuracy = model.score(dataset[0], dataset[1]) + log_artifact_metadata(metadata={"accuracy": float(accuracy)}) return model @pipeline @@ -300,15 +348,19 @@ def model_finetuning_pipeline(dataset_version: Optional[str] = None): model_finetuner_step(model=model, dataset=dataset) def main(): - save_artifact(SVC(gamma=0.001), name="my_model", version="1") + untrained_model = SVC(gamma=0.001) + save_artifact(untrained_model, name="my_model", version="1") model_finetuning_pipeline() model_finetuning_pipeline(dataset_version="1") + latest_trained_model = load_artifact("my_model") + old_dataset = load_artifact("my_dataset", version="1") + latest_trained_model.predict(old_dataset[0]) if __name__ == "__main__": main() ``` -This example illustrates the creation, versioning, and metadata logging of artifacts within a ZenML pipeline. For more details, refer to the [ZenML documentation](https://sdkdocs.zenml.io/latest). +This code demonstrates the creation and management of datasets and models, including versioning and metadata logging. ================================================== @@ -316,64 +368,62 @@ This example illustrates the creation, versioning, and metadata logging of artif ### Summary of ZenML Documentation on Creating ML Pipelines -#### Overview -ZenML facilitates the creation of modular and scalable machine learning (ML) pipelines by decoupling stages such as data ingestion, preprocessing, and model evaluation into **Steps** that can be integrated into an end-to-end **Pipeline**. This structure enhances reproducibility and efficiency in ML workflows. +**Overview:** +ZenML simplifies the creation of production-ready ML pipelines by decoupling stages such as data ingestion, preprocessing, and model evaluation into modular **Steps** that can be integrated into an end-to-end **Pipeline**. This structure enhances manageability, reusability, and scalability. 
-#### Installation -To begin using ZenML, install it using: +**Installation:** +To get started with ZenML, install it using: ```shell pip install "zenml[server]" zenml login --local # Launches the dashboard locally ``` -#### Simple ML Pipeline Example -A basic ML pipeline can be created as follows: - -```python -from zenml import pipeline, step - -@step -def load_data() -> dict: - training_data = [[1, 2], [3, 4], [5, 6]] - labels = [0, 1, 0] - return {'features': training_data, 'labels': labels} +**Simple ML Pipeline Example:** +A basic pipeline can be created with the following components: -@step -def train_model(data: dict) -> None: - total_features = sum(map(sum, data['features'])) - total_labels = sum(data['labels']) - print(f"Trained model using {len(data['features'])} data points. Feature sum is {total_features}, label sum is {total_labels}") +1. **Load Data Step:** + ```python + @step + def load_data() -> dict: + training_data = [[1, 2], [3, 4], [5, 6]] + labels = [0, 1, 0] + return {'features': training_data, 'labels': labels} + ``` -@pipeline -def simple_ml_pipeline(): - dataset = load_data() - train_model(dataset) +2. **Train Model Step:** + ```python + @step + def train_model(data: dict) -> None: + total_features = sum(map(sum, data['features'])) + total_labels = sum(data['labels']) + print(f"Trained model using {len(data['features'])} data points. " + f"Feature sum is {total_features}, label sum is {total_labels}") + ``` -if __name__ == "__main__": - run = simple_ml_pipeline() -``` +3. **Pipeline Definition:** + ```python + @pipeline + def simple_ml_pipeline(): + dataset = load_data() + train_model(dataset) + ``` -#### Running the Pipeline -Execute the pipeline with: -```bash -$ python run.py -``` -This will initiate the pipeline and provide a summary of the execution in the terminal. +4. **Execution:** + ```python + if __name__ == "__main__": + run = simple_ml_pipeline() + ``` -#### Dashboard Exploration -After execution, view results in the ZenML Dashboard by running: -```bash -zenml login --local -``` -Access the dashboard at [http://127.0.0.1:8237/](http://127.0.0.1:8237/) and log in with the username **"default"**. +**Dashboard Exploration:** +After running the pipeline, use `zenml login --local` to access the ZenML Dashboard at [http://127.0.0.1:8237/](http://127.0.0.1:8237/). Log in with the username **"default"** to view execution history and artifacts. -#### Understanding Steps and Artifacts -Each function in the pipeline is a **step** connected by **artifacts**, which are the outputs of steps fed into downstream steps. ZenML automatically tracks and versions these artifacts and their associated configurations. +**Understanding Steps and Artifacts:** +Each function executed in the pipeline is represented as a `step` in a Directed Acyclic Graph (DAG). Artifacts are the outputs from these steps, which ZenML automatically tracks and versions. -#### Expanding to a Full ML Workflow -To create a more complex pipeline using the Iris dataset and a Support Vector Classifier (SVC): +**Expanding to a Full ML Workflow:** +To create a more complex workflow using the Iris dataset and a Support Vector Classifier (SVC): -1. **Import Required Libraries**: +1. **Imports:** ```python from typing_extensions import Annotated, Tuple import pandas as pd @@ -383,42 +433,39 @@ To create a more complex pipeline using the Iris dataset and a Support Vector Cl from zenml import pipeline, step ``` -2. 
**Install Requirements**: - ```bash - pip install matplotlib - zenml integration install sklearn -y - ``` - -3. **Define Data Loader**: +2. **Data Loader Step:** ```python @step - def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: + def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], ...]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) ``` -4. **Create Training Step**: +3. **Training Step:** ```python @step - def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: + def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], ...]: model = SVC(gamma=gamma) - model.fit(X_train, y_train) - return model, model.score(X_train, y_train) + model.fit(X_train.to_numpy(), y_train.to_numpy()) + return model, model.score(X_train.to_numpy(), y_train.to_numpy()) ``` -5. **Combine Steps into a Pipeline**: +4. **Pipeline Definition:** ```python @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) + ``` +5. **Execution:** + ```python if __name__ == "__main__": training_pipeline() ``` -#### YAML Configuration -Pipelines can also be configured using a YAML file: +**Configuration with YAML:** +You can configure pipeline runs using a YAML file: ```python training_pipeline = training_pipeline.with_options(config_path='/local/path/to/config.yaml') training_pipeline() @@ -428,15 +475,15 @@ A simple YAML configuration might look like: parameters: gamma: 0.01 ``` -Generate a template for the configuration file using: +Generate a template config file with: ```python training_pipeline.write_run_configuration_template(path='/local/path/to/config.yaml') ``` -#### Full Code Example -The complete code for the Iris dataset pipeline is provided in the documentation, which integrates all the steps and configurations discussed. +**Complete Code Example:** +The full code for the Iris dataset SVC pipeline is provided in the documentation, combining all the steps and configurations discussed. -This summary retains key technical details and code snippets necessary for understanding and implementing ZenML pipelines. +This summary encapsulates the essential components and functionalities of ZenML for creating and managing ML pipelines while ensuring clarity and conciseness. ================================================== @@ -444,39 +491,50 @@ This summary retains key technical details and code snippets necessary for under ### Summary of ZenML Caching Documentation -**Overview**: ZenML enhances the development of machine learning pipelines through step caching, which reuses outputs from previous runs when inputs, parameters, or code remain unchanged. +**Overview:** +ZenML facilitates rapid development of machine learning pipelines through step caching, which reuses outputs from previous runs when inputs, parameters, or code remain unchanged. Caching is enabled by default, allowing for efficient execution, especially when running pipelines without a schedule. -**Caching Behavior**: -- **Default Caching**: ZenML caches outputs by default, saving time and resources by avoiding unnecessary re-execution of steps. 
-- **Artifact Store**: Cached outputs are stored in the artifact store. -- **Client-Side Caching**: When running pipelines without a schedule, caching is computed on the client machine. To enforce dynamic caching on the orchestrator, set `ZENML_PREVENT_CLIENT_SIDE_CACHING=True`. -- **Manual Control**: Caching does not automatically detect changes in external inputs or file systems. Use `enable_cache=False` for steps that rely on such changes. +**Key Points:** +- **Caching Behavior:** ZenML automatically caches the outputs of steps unless there are changes in inputs, parameters, or code. This is beneficial for saving time and resources during remote executions. +- **Manual Caching Control:** Users must manually disable caching for steps reliant on external inputs or file-system changes using `@step(enable_cache=False)`. -**Configuring Caching**: -1. **Pipeline Level**: Set caching behavior in the `@pipeline` decorator. - ```python - @pipeline(enable_cache=False) - def first_pipeline(...): - """Pipeline with cache disabled""" - ``` -2. **Runtime Configuration**: Override caching settings at runtime using `with_options`. - ```python - first_pipeline = first_pipeline.with_options(enable_cache=False) - ``` -3. **Step Level**: Control caching for individual steps using the `@step` decorator. - ```python - @step(enable_cache=False) - def import_data_from_api(...): - """Always run this step""" - ``` +**Configuring Caching:** +1. **Pipeline Level:** + - Set caching behavior in the `@pipeline` decorator: + ```python + @pipeline(enable_cache=False) + def first_pipeline(...): + ... + ``` + - This disables caching for all steps unless overridden at the step level. + +2. **Runtime Configuration:** + - Override caching settings at runtime: + ```python + first_pipeline = first_pipeline.with_options(enable_cache=False) + ``` + +3. **Step Level:** + - Configure caching for individual steps: + ```python + @step(enable_cache=False) + def import_data_from_api(...): + ... 
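+        # This step always re-executes: enable_cache=False bypasses the cache,
+        # which suits steps that read external inputs ZenML cannot track.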
+ ``` + - Use `with_options` for dynamic control: + ```python + import_data_from_api = import_data_from_api.with_options(enable_cache=False) + ``` + +**Example Code:** +The following code demonstrates caching in a simple ZenML pipeline: -**Code Example**: -A simplified example demonstrates caching in action: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split +from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.logger import get_logger @@ -484,9 +542,9 @@ from zenml.logger import get_logger logger = get_logger(__name__) @step -def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], ...]: +def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) - return train_test_split(iris.data, iris.target, test_size=0.2) + return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: @@ -504,13 +562,13 @@ if __name__ == "__main__": logger.info("First step cached, second not due to parameter change") training_pipeline(gamma=0.0001) svc_trainer = svc_trainer.with_options(enable_cache=False) - logger.info("Caching disabled for the second step") + logger.info("First step cached, second not due to settings") training_pipeline() logger.info("Caching disabled for the entire pipeline") training_pipeline.with_options(enable_cache=False)() ``` -This example illustrates how caching works, including scenarios where it is enabled or disabled based on parameters and settings. +This example illustrates how caching works in ZenML, including how to enable and disable it at various levels. ================================================== @@ -522,7 +580,7 @@ This documentation outlines a simple starter project to apply foundational MLOps #### Getting Started -1. **Set Up Environment**: Create a fresh virtual environment and install dependencies: +1. **Set Up Environment**: Create a fresh virtual environment and install necessary dependencies: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y @@ -536,7 +594,7 @@ This documentation outlines a simple starter project to apply foundational MLOps pip install -r requirements.txt ``` - **Alternative Method**: Clone the ZenML MLOps starter example if the above steps fail: + **Alternative Setup**: If the above steps fail, clone the ZenML MLOps starter example: ```bash git clone --depth 1 git@github.com:zenml-io/zenml.git cd zenml/examples/mlops_starter @@ -549,11 +607,11 @@ This documentation outlines a simple starter project to apply foundational MLOps By following the project, you will execute three key pipelines: - **Feature Engineering Pipeline**: Loads and prepares data for training. - **Training Pipeline**: Trains a model using the preprocessed dataset. -- **Batch Inference Pipeline**: Makes predictions on new data with the trained model. +- **Batch Inference Pipeline**: Runs predictions on new data with the trained model. #### Next Steps -Experiment with ZenML to solidify your understanding. When ready, proceed to the [production guide](../production-guide/) for advanced topics. 
+After completing the project, consider introducing the ZenML starter template to your team to leverage a standardized MLOps framework. For further learning, experiment with ZenML and proceed to the [production guide](../production-guide/) for advanced topics. ================================================== @@ -561,118 +619,112 @@ Experiment with ZenML to solidify your understanding. When ready, proceed to the # ZenML Starter Guide Summary -The ZenML Starter Guide is designed for MLOps engineers and data scientists to build robust ML platforms using the ZenML framework. It provides foundational knowledge and tools to manage machine learning operations effectively. +The ZenML Starter Guide is designed for MLOps engineers and data scientists looking to build robust ML platforms. It provides foundational knowledge of the ZenML framework and tools for managing machine learning operations. -## Key Topics Covered: -- **Creating Your First ML Pipeline**: Step-by-step instructions to set up an ML pipeline. -- **Understanding Caching Between Pipeline Steps**: Techniques for optimizing pipeline performance through caching. -- **Managing Data and Data Versioning**: Best practices for handling data and maintaining version control. -- **Tracking Your Machine Learning Models**: Methods for monitoring and managing ML models throughout their lifecycle. +### Key Topics Covered: +- **Creating Your First ML Pipeline**: Learn to set up and execute a basic ML pipeline. +- **Understanding Caching**: Explore how to cache results between pipeline steps for efficiency. +- **Managing Data and Versioning**: Understand data management and version control. +- **Tracking ML Models**: Learn to track and manage machine learning models effectively. -## Prerequisites: -- A Python environment is required. -- Install `virtualenv` for an easier setup. +### Prerequisites: +- A Python environment. +- `virtualenv` installed. -By the end of the guide, users will complete a starter project, establishing a foundational understanding of MLOps with ZenML. Prepare your development environment and begin your journey into MLOps! +By the end of the guide, users will complete a starter project, marking their entry into MLOps with ZenML. Prepare your development environment and begin your journey! ================================================== === File: docs/book/user-guide/production-guide/ci-cd.md === -# Managing the Lifecycle of a ZenML Pipeline with CI/CD - -## Overview -To enhance the deployment of ZenML pipelines, it's beneficial to integrate Continuous Integration (CI) and Continuous Delivery (CD) through a central workflow engine. This allows for local experimentation and automated testing and validation of code changes via a pull request/merge request process, ultimately leading to automatic deployment in production. - -## Setting Up CI/CD with GitHub Actions -To implement CI/CD, we will use GitHub Actions within a GitHub repository. For a practical example, refer to the [ZenML Gitflow Repository](https://github.com/zenml-io/zenml-gitflow/), which demonstrates CI/CD automation for machine learning. - -### Step 1: Configure an API Key in ZenML -Create an API key for machine-to-machine communication: - -```bash -zenml service-account create github_action_api_key -``` - -This command will generate an API key that must be securely stored, as it will not be shown again. +### Managing the Lifecycle of a ZenML Pipeline with CI/CD -### Step 2: Set Up Secrets in GitHub -Store the `ZENML_API_KEY` in your GitHub repository secrets. 
This allows the GitHub Actions workflow to access the key securely. +#### Overview +This documentation outlines the setup of Continuous Integration and Delivery (CI/CD) for ZenML pipelines, transitioning from local execution to a centralized workflow engine integrated with GitHub Actions. This enables automated testing and deployment of code changes after peer review. -### Step 3: (Optional) Configure Staging and Production Stacks -You can set up different stacks for staging and production environments to manage resources separately. This may involve using different data sources or configuration files for each environment. +#### Key Steps to Set Up CI/CD -### Step 4: Trigger a Pipeline on Pull Requests -To ensure code quality, set up a GitHub Action that runs your pipeline when changes are made. Configure the workflow to trigger on pull requests: +1. **Configure an API Key in ZenML** + - Create an API key for machine-to-machine connections: + ```bash + zenml service-account create github_action_api_key + ``` + - Store the generated API key securely as it will not be displayed again. -```yaml -on: - pull_request: - branches: [staging, main] -``` +2. **Set Up Secrets in GitHub** + - Store the `ZENML_API_KEY` in GitHub secrets for use in GitHub Actions. Refer to [GitHub documentation](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository) for details. -### Step 5: Define Workflow Jobs -Set essential environment variables in your workflow: +3. **(Optional) Configure Staging and Production Stacks** + - Use different stacks for staging and production if needed. This may involve different data sources or configuration files for various environments. -```yaml -jobs: - run-staging-workflow: - runs-on: run-zenml-pipeline - env: - ZENML_STORE_URL: ${{ secrets.ZENML_HOST }} - ZENML_STORE_API_KEY: ${{ secrets.ZENML_API_KEY }} - ZENML_STACK: stack_name - ZENML_GITHUB_SHA: ${{ github.event.pull_request.head.sha }} - ZENML_GITHUB_URL_PR: ${{ github.event.pull_request._links.html.href }} -``` +4. **Trigger a Pipeline on Pull Requests** + - Set up a GitHub Action workflow to run the pipeline automatically on code changes. Use the following configuration to trigger on pull requests: + ```yaml + on: + pull_request: + branches: [ staging, main ] + ``` -### Step 6: Install Dependencies and Run the Pipeline -Include steps to check out the code, set up Python, install requirements, connect to the ZenML server, set the active stack, and run the pipeline: +5. **Define Job Steps in the Workflow** + - Here’s a simplified version of the job configuration: + ```yaml + jobs: + run-staging-workflow: + runs-on: run-zenml-pipeline + env: + ZENML_STORE_URL: ${{ secrets.ZENML_HOST }} + ZENML_STORE_API_KEY: ${{ secrets.ZENML_API_KEY }} + ZENML_STACK: stack_name + ZENML_GITHUB_SHA: ${{ github.event.pull_request.head.sha }} + ZENML_GITHUB_URL_PR: ${{ github.event.pull_request._links.html.href }} + ``` -```yaml -steps: - - name: Check out repository code - uses: actions/checkout@v3 +6. 
**Install Requirements and Run the Pipeline** + - Include the following steps in your workflow: + ```yaml + steps: + - name: Check out repository code + uses: actions/checkout@v3 - - uses: actions/setup-python@v4 - with: - python-version: '3.9' + - uses: actions/setup-python@v4 + with: + python-version: '3.9' - - name: Install requirements - run: pip3 install -r requirements.txt + - name: Install requirements + run: pip3 install -r requirements.txt - - name: Confirm ZenML client connection - run: zenml status + - name: Confirm ZenML client is connected + run: zenml status - - name: Set stack - run: zenml stack set ${{ env.ZENML_STACK }} + - name: Set stack + run: zenml stack set ${{ env.ZENML_STACK }} - - name: Run pipeline - run: python run.py --pipeline end-to-end --dataset production --version ${{ env.ZENML_GITHUB_SHA }} --github-pr-url ${{ env.ZENML_GITHUB_URL_PR }} -``` + - name: Run pipeline + run: python run.py --pipeline end-to-end --dataset production --version ${{ env.ZENML_GITHUB_SHA }} --github-pr-url ${{ env.ZENML_GITHUB_URL_PR }} + ``` -### Step 7: (Optional) Comment Metrics on Pull Requests -You can configure the workflow to leave a report on the pull request based on the pipeline run. Refer to the [template](https://github.com/zenml-io/zenml-gitflow/blob/main/.github/workflows/pipeline_run.yaml#L87-L99) for implementation details. +7. **(Optional) Comment Metrics on the Pull Request** + - Configure the workflow to leave a report on the pull request based on the pipeline results. Refer to the template in the ZenML Gitflow repository for implementation. -This setup ensures that code changes are tested and validated before being merged into the main branch, enhancing the reliability of your ZenML pipelines in production. +This setup ensures that only validated code is deployed to production, enhancing the reliability of the CI/CD process for ZenML pipelines. ================================================== === File: docs/book/user-guide/production-guide/remote-storage.md === -### Transitioning to Remote Artifact Storage +### Summary: Transitioning to Remote Artifact Storage #### Overview -Transitioning to remote artifact storage enhances collaboration and scalability in MLOps by allowing artifacts to be stored in the cloud, making them accessible from anywhere with the right permissions. +Transitioning to remote artifact storage enhances collaboration and scalability for production workloads by storing artifacts in the cloud. This allows access from anywhere with appropriate permissions. #### Connecting Remote Storage -When using remote storage, the main change is that artifacts are stored centrally. +When using remote storage, the only change is that artifacts are stored in a central location. #### Provisioning and Registering Remote Artifact Stores -ZenML supports various cloud providers for artifact storage. Below are instructions for major providers: +ZenML supports various artifact store flavors. Here’s how to set up on major cloud providers: -- **AWS S3** - 1. Install AWS CLI: [AWS CLI Documentation](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). +- **AWS (S3)** + 1. Install AWS CLI. 2. Install ZenML S3 integration: ```shell zenml integration install s3 -y @@ -682,8 +734,8 @@ ZenML supports various cloud providers for artifact storage. Below are instructi zenml artifact-store register cloud_artifact_store -f s3 --path=s3://bucket-name ``` -- **GCP GCS** - 1. 
Install Google Cloud CLI: [Google Cloud Documentation](https://cloud.google.com/sdk/docs/install-sdk). +- **GCP (GCS)** + 1. Install Google Cloud CLI. 2. Install ZenML GCP integration: ```shell zenml integration install gcp -y @@ -693,8 +745,8 @@ ZenML supports various cloud providers for artifact storage. Below are instructi zenml artifact-store register cloud_artifact_store -f gcp --path=gs://bucket-name ``` -- **Azure Blob Storage** - 1. Install Azure CLI: [Azure Documentation](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli). +- **Azure** + 1. Install Azure CLI. 2. Install ZenML Azure integration: ```shell zenml integration install azure -y @@ -744,64 +796,72 @@ zenml artifact-store connect cloud_artifact_store --connector cloud_connector python run.py --training-pipeline ``` -Artifacts will be stored in remote storage, making them accessible to your team. To list artifact versions: +Artifacts will be stored in remote storage, making them accessible to team members. You can list artifact versions: ```shell zenml artifact version list --created="gte:$(date -v-15M '+%Y-%m-%d %H:%M:%S')" ``` -By connecting remote storage, you enable a collaborative and scalable MLOps workflow, ensuring artifacts are part of a cloud ecosystem. +#### Conclusion +Using remote storage significantly enhances the collaborative and scalable aspects of MLOps workflows, allowing artifacts to be shared across the cloud-based ecosystem. ================================================== === File: docs/book/user-guide/production-guide/understand-stacks.md === -### Summary of ZenML Stack Documentation +# Summary of ZenML Stack Management Documentation -#### Overview of Stacks +## Overview of Stacks - A **stack** is the configuration of tools and infrastructure for running ZenML pipelines. By default, pipelines run on the `default` stack. -- ZenML acts as a translation layer, enabling code to run on any configured stack without modifying the code. +- ZenML acts as a translation layer between the code domain (user's Python code) and the infrastructure domain (stack). -#### Separation of Code and Infrastructure -- ZenML separates the code domain (user's Python code) from the infrastructure domain (stack configuration), allowing easy switching between environments. +## Key Concepts +- **Separation of Code and Configuration**: This allows easy switching of environments without altering code, enabling domain experts to work independently. +- **Active Stack**: The stack currently in use for running pipelines can be checked with `zenml stack describe` and listed with `zenml stack list`. -#### Default Stack -- Use `zenml stack describe` to view details about the active stack: - ```bash - zenml stack describe - ``` -- Use `zenml stack list` to see all registered stacks: - ```bash - zenml stack list - ``` +## Stack Components +1. **Orchestrator**: Executes pipeline code (default is a local Python thread). + - List orchestrators with `zenml orchestrator list`. + +2. **Artifact Store**: Stores outputs of pipeline steps. + - List artifact stores with `zenml artifact-store list`. -#### Stack Components -- A stack consists of at least an **orchestrator** (executes pipeline code) and an **artifact store** (persists step outputs). -- Additional components can include experiment trackers and model deployers. A **container registry** is crucial for storing containerized images. +3. **Additional Components**: Other components include experiment trackers, model deployers, and container registries. 
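+
+The components of the active stack can also be inspected from Python (a minimal sketch; it reuses the `Client().active_stack` accessor that appears in the artifact-management examples earlier in this document):
+
+```python
+from zenml.client import Client
+
+# Look up the active stack and its two required components.
+stack = Client().active_stack
+print(stack.name)
+print(stack.orchestrator.name)
+print(stack.artifact_store.path)
+```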
-#### Registering a Stack -1. **Create an Artifact Store**: - ```bash - zenml artifact-store register my_artifact_store --flavor=local - ``` -2. **Create a New Stack**: - ```bash - zenml stack register a_new_local_stack -o default -a my_artifact_store - ``` +## Registering a Stack +### Create an Artifact Store +```bash +zenml artifact-store register my_artifact_store --flavor=local +``` +- **Command Breakdown**: + - `artifact-store`: Top-level group for stack components. + - `register`: Command to create a new component. + - `my_artifact_store`: Unique name for the artifact store. + - `--flavor=local`: Specifies the implementation type. -#### Switching Stacks -- Use the VS Code extension to view and switch stacks easily. +### Create a Local Stack +```bash +zenml stack register a_new_local_stack -o default -a my_artifact_store +``` +- **Command Breakdown**: + - `stack`: CLI group for stack interactions. + - `register`: Command to create a new stack. + - `-o`: Specifies the orchestrator. + - `-a`: Specifies the artifact store. -#### Running a Pipeline on a New Stack +## Switching Stacks +- Use the ZenML VS Code extension to view and switch stacks easily. + +## Running a Pipeline on the New Stack 1. Set the new stack as active: ```bash zenml stack set a_new_local_stack ``` -2. Run the pipeline: +2. Execute the pipeline: ```bash python run.py --training-pipeline ``` -This documentation provides essential commands and concepts for managing stacks in ZenML, enabling users to configure and run machine learning workflows effectively. +This documentation provides essential commands and concepts for managing stacks in ZenML, enabling users to configure and switch their machine learning workflows efficiently. ================================================== @@ -810,22 +870,21 @@ This documentation provides essential commands and concepts for managing stacks ### Summary of ZenML Pipeline Configuration Documentation #### Overview -This documentation explains how to configure a ZenML pipeline to add compute resources and manage dependencies using a YAML configuration file. +This documentation explains how to configure a ZenML pipeline to add compute resources and manage dependencies through a YAML configuration file. -#### Key Components of Pipeline Configuration +#### Configuring the Pipeline +To configure the pipeline, the `run.py` script sets the configuration path and executes the training pipeline: -1. **Pipeline Configuration Script**: - - The script `run.py` sets the configuration path for the pipeline: - ```python - pipeline_args["config_path"] = os.path.join(config_folder, "training_rf.yaml") - training_pipeline_configured = training_pipeline.with_options(**pipeline_args) - training_pipeline_configured() - ``` +```python +pipeline_args["config_path"] = os.path.join(config_folder, "training_rf.yaml") +training_pipeline_configured = training_pipeline.with_options(**pipeline_args) +training_pipeline_configured() +``` -2. **YAML Configuration Breakdown**: - - The YAML file (`training_rf.yaml`) is used to configure the pipeline. +The YAML configuration file `training_rf.yaml` is essential for defining the pipeline's settings. - **Docker Settings**: +#### YAML Configuration Breakdown +1. **Docker Settings**: ```yaml settings: docker: @@ -834,9 +893,9 @@ This documentation explains how to configure a ZenML pipeline to add compute res requirements: - pyarrow ``` - - Specifies required libraries and integrations for the Docker container. 
+ This section specifies required libraries for the Docker container, including `pyarrow` and `scikit-learn`. - **Model Association**: +2. **Model Association**: ```yaml model: name: breast_cancer_classifier @@ -845,53 +904,58 @@ This documentation explains how to configure a ZenML pipeline to add compute res description: A breast cancer classifier tags: ["breast_cancer", "classifier"] ``` - - Defines the model's metadata, which can be viewed via CLI or ZenML Pro dashboard. + This section associates a ZenML model with the pipeline. - **Parameters**: +3. **Parameters**: ```yaml parameters: model_type: "rf" # Choose between rf/sgd ``` - - Specifies the parameters expected by the pipeline. + This defines parameters expected by the pipeline, such as `model_type`. -3. **Scaling Compute Resources**: - - To adjust resource allocation, add the following to `training_rf.yaml`: - ```yaml - settings: - orchestrator: - memory: 32 # in GB - steps: - model_trainer: - settings: - orchestrator: - cpus: 8 - ``` - - This sets the memory for the entire pipeline and CPU cores for specific steps. - - **Azure Users**: - - For Kubernetes orchestrators, the configuration differs slightly: - ```yaml - settings: - resources: - memory: "32GB" - steps: - model_trainer: - settings: - resources: - memory: "8GB" - ``` +#### Scaling Compute Resources +To adjust resource requirements, add the following to `training_rf.yaml`: + +```yaml +settings: + orchestrator: + memory: 32 # in GB + +steps: + model_trainer: + settings: + orchestrator: + cpus: 8 +``` +This configures the entire pipeline with 32 GB of memory and 8 CPU cores for the model trainer step. + +##### Azure Users +For Azure with Kubernetes, the configuration should be: + +```yaml +settings: + resources: + memory: "32GB" + +steps: + model_trainer: + settings: + resources: + memory: "8GB" +``` #### Running the Pipeline -To execute the pipeline with the new configuration: +To execute the pipeline with the new configuration, run: + ```python python run.py --training-pipeline ``` -This command provisions the machine with the specified resource configuration. -#### Additional Notes -- Not all orchestrators support `ResourceSettings` directly. For more details, refer to the documentation on [ResourceSettings](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) and GPU integration. +#### Important Notes +- Not all orchestrators support `ResourceSettings`. +- For further details on settings and GPU attachment, refer to the ZenML documentation on runtime configuration and GPU training. -This summary captures the essential technical information and configuration steps for setting up and scaling a ZenML pipeline. +This concise summary captures the essential technical details for configuring and scaling a ZenML pipeline while omitting redundant explanations. ================================================== @@ -899,24 +963,21 @@ This summary captures the essential technical information and configuration step ### Deploying ZenML -Deploying ZenML is essential for production use. Initially, ZenML operates locally, using an SQLite database to store metadata (pipelines, models, artifacts). For production, a centralized deployment is necessary to facilitate collaboration and interaction among infrastructure components. +Deploying ZenML is essential for moving from local development to production. Initially, ZenML operates locally with an SQLite database for storing metadata (pipelines, models, artifacts). 
For production, the server must be deployed centrally to facilitate collaboration and interaction among infrastructure components. #### Deployment Options 1. **ZenML Pro Trial**: - - A managed SaaS solution with one-click deployment. + - A managed SaaS solution offering one-click deployment. - To connect to a trial instance, run: ```bash zenml login --pro ``` - - Additional features and a new dashboard are included. You can switch back to self-hosting later. + - Additional features and a new dashboard are included. Self-hosting is an option post-trial. -2. **Self-hosting on Cloud Provider**: - - ZenML is open source and can be self-hosted in a Kubernetes cluster. - - If you lack a cluster, create one using your cloud provider's documentation. Links for setting up on: - - [AWS](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) - - [Azure](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli) - - [GCP](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster#before_you_begin) +2. **Self-hosting on Cloud Provider**: + - ZenML is open-source and can be self-hosted in a Kubernetes cluster. + - For cluster creation, refer to documentation for [AWS](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html), [Azure](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli), and [GCP](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster#before_you_begin). #### Connecting to Deployed ZenML @@ -924,37 +985,39 @@ To connect your local ZenML client to the ZenML Server, use: ```bash zenml login ``` -This command initiates a browser-based validation process. Once connected, all metadata will be centrally tracked. To revert to the local experience, use: +This command initiates a browser-based validation process. Once connected, all metadata will be centrally tracked. You can revert to local mode using: ```bash zenml logout ``` #### Further Resources -- [Deploying ZenML](../../getting-started/deploying-zenml/README.md): Overview of deployment options and system architecture. -- [Full how-to guides](../../getting-started/deploying-zenml/README.md): Detailed guides for deploying ZenML on various platforms (Docker, Hugging Face Spaces, Kubernetes, etc.). +- [Deploying ZenML](../../getting-started/deploying-zenml/README.md): Overview of deployment options and architecture. +- [Full how-to guides](../../getting-started/deploying-zenml/README.md): Instructions for deploying ZenML on various platforms (Docker, Hugging Face Spaces, Kubernetes). ================================================== === File: docs/book/user-guide/production-guide/connect-code-repository.md === -### Summary of Connecting a Git Repository to ZenML +### Summary of ZenML Git Integration Documentation + +**Overview**: Connect a Git repository to ZenML to enhance collaboration and optimize Docker builds in MLOps projects. -**Overview**: Connecting a Git repository to ZenML enhances MLOps by optimizing Docker builds and facilitating code collaboration. +#### Benefits of Connecting a Git Repository +- Reduces redundant Docker builds by reusing existing images based on Git commit hashes. +- Facilitates better code management and collaboration among team members. -**Pipeline Execution Flow**: +#### Pipeline Execution Flow 1. Trigger a pipeline run locally. -2. ZenML parses the `@pipeline` function. +2. ZenML parses the `@pipeline` function for necessary steps. 3. 
Local client requests stack info from the ZenML server. -4. If a Git repository is detected, it checks for reusable Docker images based on the current commit hash. +4. If using a Git repository, it checks for existing Docker images based on the current commit. 5. The orchestrator sets up the execution environment in the cloud. 6. Code is downloaded from the Git repository, and the existing Docker image is used. 7. Pipeline steps execute, storing artifacts in the cloud. 8. Execution status and metadata are reported back to the ZenML server. -**Benefits**: Reduces redundant builds and allows simultaneous codebase collaboration with version tracking. - -### Creating a GitHub Repository +#### Creating a GitHub Repository 1. Sign in to GitHub. 2. Click "+" and select "New repository." 3. Name the repository, set visibility, and optionally add a README or .gitignore. @@ -969,14 +1032,12 @@ git remote add origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git git push -u origin master ``` -### Linking to ZenML -To connect your GitHub repository, obtain a Personal Access Token (PAT): -1. Go to GitHub account settings > Developer settings. -2. Select "Personal access tokens" and generate a new token. -3. Name the token and grant `contents` read-only access. -4. Copy the generated token. - -**Install GitHub Integration and Register Repository**: +#### Linking ZenML to GitHub +1. Obtain a GitHub Personal Access Token (PAT): + - Go to GitHub settings > Developer settings > Personal access tokens. + - Generate a new token with `contents` read-only access for the specific repository. + +2. Install GitHub integration and register the repository: ```sh zenml integration install github zenml code-repository register --type=github \ @@ -985,14 +1046,17 @@ zenml code-repository register --type=github \ --token=YOUR_GITHUB_PERSONAL_ACCESS_TOKEN ``` -**Testing the Connection**: -Run the training pipeline: +#### Running the Pipeline +- First run builds the Docker image: +```python +python run.py --training-pipeline +``` +- Subsequent runs skip Docker building: ```python -python run.py --training-pipeline # Builds Docker image -python run.py --training-pipeline # Skips Docker building +python run.py --training-pipeline ``` -For more details, refer to the ZenML Git Integration documentation. +For further details, refer to the ZenML Git Integration documentation. ================================================== @@ -1034,38 +1098,40 @@ This documentation outlines the steps to create an end-to-end MLOps project usin ``` #### Learning Outcomes -The e2e project template demonstrates core ZenML concepts for supervised ML with batch predictions, building on the starter project. Users are encouraged to run pipelines on a remote cloud stack and a tracked Git repository. +The e2e project template demonstrates core ZenML concepts for supervised ML with batch predictions, building on the starter project. Users are encouraged to run pipelines on a remote cloud stack and a tracked Git repository to reinforce learned concepts. #### Conclusion -This guide equips you with the knowledge to implement an end-to-end MLOps project using ZenML. For further learning on advanced concepts, refer to the [how-to section](../../how-to/pipeline-development/build-pipelines/README.md). +This guide prepares you to create an end-to-end MLOps project with ZenML, connected to cloud infrastructure. For further learning on advanced topics, refer to the [how-to section](../../how-to/pipeline-development/build-pipelines/README.md). 
Good luck with your MLOps endeavors! ================================================== === File: docs/book/user-guide/production-guide/cloud-orchestration.md === -### Summary: Orchestrating MLOps Pipelines on the Cloud - -This documentation outlines the process of transitioning MLOps pipelines from local execution to a cloud environment, enhancing scalability and robustness. Key components involved in this cloud stack are: - -1. **Orchestrator**: Manages workflow and execution of pipelines. -2. **Container Registry**: Stores Docker container images. -3. **Remote Storage**: Used for storing artifacts. +### Summary of Cloud Orchestration Documentation -#### Cloud Stack Overview -The simplest orchestrator to start with is **Skypilot**, which provisions a VM on a public cloud to execute pipelines. ZenML uses **Docker** to package code and dependencies into an image that is pushed to the container registry. The orchestrator then pulls this image for execution. - -**Sequence of Events**: -1. User runs a pipeline via `run.py`. -2. Client retrieves stack info from the server. -3. Client builds and pushes the Docker image to the container registry. -4. Client creates a run in the orchestrator, which provisions a VM. -5. Orchestrator pulls the image and executes the pipeline. -6. Artifacts are stored in the artifact store. +#### Overview +This documentation outlines how to transition MLOps pipelines from local execution to a cloud environment, utilizing cloud resources for scalability and robustness. Key components include: + +- **Orchestrator**: Manages workflow and execution of pipelines. +- **Container Registry**: Stores Docker container images. +- **Remote Storage**: Completes the cloud stack for running pipelines. + +#### Cloud Stack Components +1. **Skypilot Orchestrator**: A simple option that provisions a VM on a public cloud to execute pipelines. +2. **Docker**: Used for packaging code into images that include all dependencies for pipeline execution. + +#### Pipeline Execution Sequence +1. User initiates a pipeline via `run.py`. +2. Client retrieves stack configuration from the server. +3. Client builds and pushes a Docker image to the container registry. +4. Client creates a run in the orchestrator, provisioning a VM. +5. Orchestrator pulls the Docker image from the registry. +6. Artifacts are stored in the artifact store (cloud storage). 7. Status updates are sent back to the ZenML server. -#### Provisioning and Registering Components +#### Setting Up Cloud Resources -**AWS Setup**: +##### AWS Setup 1. Install integrations: ```shell zenml integration install aws skypilot_aws -y @@ -1085,7 +1151,7 @@ The simplest orchestrator to start with is **Skypilot**, which provisions a VM o zenml container-registry connect cloud_container_registry --connector cloud_connector ``` -**GCP Setup**: +##### GCP Setup 1. Install integrations: ```shell zenml integration install gcp skypilot_gcp -y @@ -1105,7 +1171,7 @@ The simplest orchestrator to start with is **Skypilot**, which provisions a VM o zenml container-registry connect cloud_container_registry --connector cloud_connector ``` -**Azure Setup**: +##### Azure Setup 1. 
Install integrations: ```shell zenml integration install azure kubernetes -y @@ -1126,18 +1192,22 @@ The simplest orchestrator to start with is **Skypilot**, which provisions a VM o ``` #### Running a Pipeline -After registering components, create and set the stack: -```shell -zenml stack register minimal_cloud_stack -o cloud_orchestrator -a cloud_artifact_store -c cloud_container_registry -zenml stack set minimal_cloud_stack -``` -Run the training pipeline: -```shell -python run.py --training-pipeline -``` -The pipeline will build a Docker image, push it, and execute on a cloud VM, streaming logs back to the user. - -For further exploration, refer to the **Component Guide** for various stack components integrated with ZenML. +1. Register a new stack: + ```shell + zenml stack register minimal_cloud_stack -o cloud_orchestrator -a cloud_artifact_store -c cloud_container_registry + ``` +2. Set the stack active: + ```shell + zenml stack set minimal_cloud_stack + ``` +3. Execute the training pipeline: + ```shell + python run.py --training-pipeline + ``` + +Upon execution, the pipeline builds a Docker image, pushes it, and runs on a cloud VM, streaming logs back to the user. + +For further exploration, refer to the [Component Guide](../../component-guide/README.md) for various stack components integrated with ZenML. ================================================== @@ -1145,59 +1215,57 @@ For further exploration, refer to the **Component Guide** for various stack comp # Production Guide Summary -The ZenML production guide is designed for ML practitioners looking to implement MLOps in a workplace setting, building on the concepts from the Starter guide. It focuses on transitioning from local pipeline execution to cloud production environments. +The ZenML production guide is designed for MLOps Engineers looking to implement MLOps in a workplace setting, building on the concepts from the Starter Guide. It focuses on transitioning from local pipeline execution to cloud-based production environments. ## Key Topics Covered: - **Deploying ZenML**: Instructions for setting up ZenML in a production environment. -- **Understanding Stacks**: Overview of how to manage different components in MLOps. -- **Connecting Remote Storage**: Guidelines for integrating cloud storage solutions. +- **Understanding Stacks**: Overview of the components and configurations of ZenML stacks. +- **Connecting Remote Storage**: Guidance on integrating cloud storage solutions. - **Orchestrating on the Cloud**: Techniques for managing workflows in cloud environments. -- **Configuring the Pipeline for Scalability**: Strategies for adjusting compute resources. -- **Connecting a Code Repository**: Steps to link your codebase with ZenML. +- **Configuring the Pipeline for Scalability**: Strategies to ensure pipelines can handle increased workloads. +- **Code Repository Configuration**: Steps to connect and manage code repositories effectively. ## Prerequisites: -- A Python environment with `virtualenv` installed. -- CLI tools for a chosen cloud provider (AWS, GCP, Azure) must be installed and authorized. +- A prepared Python environment with `virtualenv` installed. +- Selection and setup of a cloud provider (AWS, GCP, Azure) with the necessary CLI tools authorized. -By following this guide, practitioners will complete an end-to-end MLOps project, serving as a practical reference for future implementations. 
+By following this guide, users will complete an end-to-end MLOps project, serving as a practical reference for future implementations. ================================================== === File: docs/book/user-guide/llmops-guide/README.md === -# LLMOps Guide Summary +# ZenML LLMOps Guide Summary -The ZenML LLMOps Guide provides a framework for integrating Large Language Models (LLMs) into MLOps workflows. It targets ML practitioners and MLOps engineers aiming to utilize LLMs while ensuring workflow robustness and scalability. +The ZenML LLMOps Guide provides a comprehensive framework for integrating Large Language Models (LLMs) into MLOps workflows. It is intended for ML practitioners and MLOps engineers aiming to leverage LLMs while ensuring robust and scalable pipelines. ## Key Topics Covered: -- **RAG with ZenML**: Introduction to Retrieval-Augmented Generation (RAG). -- **Code Examples**: - - RAG in 85 lines of code. - - Evaluation in 65 lines of code. - - Finetuning LLMs in 100 lines of code. -- **Data Handling**: +- **RAG with ZenML**: Introduction to Retrieval-Augmented Generation (RAG) and its implementation. +- **Data Handling**: - Data ingestion and preprocessing. - - Embeddings generation and storage in a vector database. -- **Pipelines**: + - Generating embeddings and storing them in a vector database. +- **Inference and Evaluation**: - Basic RAG inference pipeline. - - Reranking for improved retrieval. -- **Evaluation Metrics**: - - Retrieval and generation evaluation. - - Reranking performance evaluation. - - Evaluation for finetuning. -- **Finetuning**: - - Techniques for finetuning embeddings and LLMs. - - Use of Sentence Transformers. - - Synthetic data generation. + - Evaluation metrics for retrieval and generation. + - Reranking techniques for improved retrieval. +- **Embeddings and Finetuning**: + - Finetuning embeddings and LLMs, including using Sentence Transformers. + - Synthetic data generation for training. - Deployment of finetuned models. -## Implementation Focus: -The guide includes a practical application: building a question-answering system for ZenML, demonstrating the transition from a simple RAG pipeline to more complex setups involving finetuning and reranking. +## Implementation Example: +The guide includes a practical application of a question-answering system using RAG, demonstrating the transition from a simple pipeline to a more complex setup involving finetuning and reranking. ## Prerequisites: -Users should have a Python environment with ZenML installed and familiarity with the concepts in the Starter and Production Guides. +- A Python environment with ZenML installed. +- Familiarity with concepts from the Starter and Production Guides. + +By the end of the guide, users will understand how to effectively utilize LLMs in MLOps workflows with ZenML, enabling the development of scalable LLM-powered applications. -By the end of the guide, users will understand how to effectively leverage LLMs in MLOps workflows, enabling the creation of scalable and maintainable applications. +### Visuals: +The guide includes diagrams illustrating the simplified development and deployment of LLM-powered MLOps pipelines. + +For detailed implementations and examples, refer to the specific sections linked within the guide. 
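To make the pipeline style used throughout the guide concrete, here is a minimal, illustrative sketch of a RAG indexing pipeline built from ZenML's `@step` and `@pipeline` decorators. The step names and bodies are placeholder assumptions, not code from the guide:

```python
from typing import List

from zenml import pipeline, step


@step
def ingest_docs() -> List[str]:
    # Placeholder: fetch and chunk the documentation to be indexed.
    return ["ZenML stacks separate infrastructure from pipeline code."]


@step
def generate_embeddings(docs: List[str]) -> List[List[float]]:
    # Placeholder: call an embedding model for each chunk.
    return [[float(len(doc))] for doc in docs]


@step
def index_documents(embeddings: List[List[float]]) -> None:
    # Placeholder: write the vectors to a vector database.
    print(f"Indexed {len(embeddings)} embeddings")


@pipeline
def rag_indexing_pipeline():
    docs = ingest_docs()
    embeddings = generate_embeddings(docs)
    index_documents(embeddings)


if __name__ == "__main__":
    rag_indexing_pipeline()
```

The later sections of the guide replace each placeholder with a real implementation: scraping and chunking, embedding generation, and vector storage.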
================================================== @@ -1205,27 +1273,31 @@ By the end of the guide, users will understand how to effectively leverage LLMs ### Summary: Finetuning Embeddings with Sentence Transformers -This documentation outlines the process for finetuning embeddings using the Sentence Transformers library, utilizing a dataset available on Hugging Face. +This documentation outlines the process for finetuning embeddings using the Sentence Transformers library. The pipeline involves loading a dataset, finetuning the model, evaluating the results, and visualizing them. -#### Pipeline Overview -1. **Data Loading**: Load data from Hugging Face or Argilla (using `--argilla` flag). - ```bash - python run.py --embeddings --argilla - ``` +#### Key Steps in the Pipeline: + +1. **Data Loading**: + - Load data from Hugging Face or Argilla by using the `--argilla` flag: + ```bash + python run.py --embeddings --argilla + ``` 2. **Finetuning Process**: - - **Model Loading**: Load the base model using Sentence Transformers with SDPA for efficient training. - - **Loss Function**: Employ `MatryoshkaLoss`, a wrapper around `MultipleNegativesRankingLoss`, enabling simultaneous training with varying embedding dimensions. + - **Model Loading**: Load the base model (`EMBEDDINGS_MODEL_ID_BASELINE`) using Sentence Transformers with SDPA for efficient training. + - **Loss Function**: Use a custom `MatryoshkaLoss`, which wraps `MultipleNegativesRankingLoss`, allowing simultaneous training across different embedding dimensions. - **Dataset Preparation**: Load the training dataset from a specified path and save it as a temporary JSON file. - - **Evaluator**: Create an evaluator to assess model performance during training. + - **Evaluator**: Create an evaluator with `get_evaluator()` to assess model performance during training. - **Training Arguments**: Set hyperparameters (epochs, batch size, learning rate, etc.) using `SentenceTransformerTrainingArguments`. - - **Trainer Initialization**: Initialize `SentenceTransformerTrainer` with the model, arguments, dataset, and loss function. - - **Training Execution**: Call `trainer.train()` to begin the finetuning process. - - **Model Saving**: Push the finetuned model to Hugging Face Hub. - - **Metadata Logging**: Log training parameters and hardware details. - - **Model Rehydration**: Save and reload the trained model to handle materialization errors. + - **Trainer Initialization**: Initialize `SentenceTransformerTrainer` with the model, training arguments, dataset, and loss function, then call `trainer.train()` to start training. + - **Model Saving**: After training, save the finetuned model to the Hugging Face Hub: + ```python + trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) + ``` + - **Metadata Logging**: Log training parameters and hardware information. + - **Model Rehydration**: Save the model to a temporary file, reload it into a new instance to handle materialization errors. -#### Code Snippet +#### Simplified Code Snippet: ```python # Load the base model model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE) @@ -1243,8 +1315,7 @@ trainer.train() trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) ``` -#### Conclusion -The finetuning process enhances the model's performance across different embedding sizes and ensures the model is versioned and tracked within ZenML for observability. The pipeline concludes with an evaluation of both base and finetuned embeddings, along with visualization of the results. 
+The finetuning process enhances model performance across various embedding sizes and ensures the model is versioned and tracked within ZenML for observability. The pipeline concludes with an evaluation of the base and finetuned embeddings, followed by result visualization. ================================================== @@ -1252,21 +1323,27 @@ The finetuning process enhances the model's performance across different embeddi ### Summary of Synthetic Data Generation with Distilabel -This documentation outlines the process of generating synthetic data using the `distilabel` library to fine-tune embeddings based on a pre-existing dataset of technical documentation. The dataset is available on Hugging Face and consists of `page_content` and corresponding source URLs. +This documentation outlines the process of generating synthetic data using the `distilabel` library to fine-tune embeddings based on a pre-existing dataset of technical documentation. The dataset can be found [here](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0). #### Pipeline Overview 1. Load the Hugging Face dataset. -2. Use `distilabel` to generate synthetic queries. +2. Use `distilabel` to generate synthetic data. 3. Push the generated data to a new Hugging Face dataset and an Argilla instance for annotation. #### Synthetic Data Generation -`distilabel` allows for scalable synthetic data generation by leveraging LLMs. In this case, `gpt-4o` is used, but other LLMs can be integrated. The pipeline setup includes: +`distilabel` allows for scalable knowledge distillation from LLMs, generating synthetic data or providing AI feedback. In this case, we will generate queries for documentation chunks using the `gpt-4o` model. -- **Loading the Dataset**: The `page_content` is mapped to an `anchor` for easier processing. -- **Generating Queries**: The `GenerateSentencePair` step creates both positive and negative queries for each chunk, enhancing the model's ability to distinguish appropriate queries. - -**Key Code Snippet**: +**Key Code Components:** ```python +import os +from typing import Annotated, Tuple +import distilabel +from datasets import Dataset +from distilabel.llms import OpenAILLM +from distilabel.steps import LoadDataFromHub +from distilabel.steps.tasks import GenerateSentencePair +from zenml import step + @step def generate_synthetic_queries(train_dataset: Dataset, test_dataset: Dataset) -> Tuple[Annotated[Dataset, "train_with_queries"], Annotated[Dataset, "test_with_queries"]]: llm = OpenAILLM(model=OPENAI_MODEL_GEN, api_key=os.getenv("OPENAI_API_KEY")) @@ -1281,15 +1358,15 @@ def generate_synthetic_queries(train_dataset: Dataset, test_dataset: Dataset) -> return train_distiset["default"]["train"], test_distiset["default"]["train"] ``` +- The pipeline loads the dataset, maps `page_content` to `anchor`, and generates queries for each chunk, including both positive and negative queries. #### Data Annotation with Argilla After generating synthetic data, it is pushed to Argilla for inspection. Additional metadata is added for easier navigation: - - `parent_section`: Documentation section of the chunk. - `token_count`: Number of tokens in the chunk. - Similarity metrics between queries. 
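The `token_count` metadata can be computed with any tokenizer before records are pushed to Argilla. A minimal sketch, assuming a Hugging Face tokenizer (the model ID below is illustrative, not taken from the source):

```python
from transformers import AutoTokenizer

# Illustrative tokenizer choice; any fast tokenizer works for counting.
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")


def count_tokens(chunk: str) -> int:
    # Number of tokens in a documentation chunk, excluding special tokens.
    return len(tokenizer.encode(chunk, add_special_tokens=False))


print(count_tokens("ZenML separates infrastructure from pipeline code."))
```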
-**Key Code Snippet for Formatting Data**: +**Key Code for Formatting Data:** ```python def format_data(batch): model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") @@ -1298,17 +1375,21 @@ def format_data(batch): return [vector.tolist() for vector in model.encode(batch_column)] batch["anchor-vector"] = get_embeddings(batch["anchor"]) - # Similar embeddings for questions and queries... + batch["positive-vector"] = get_embeddings(batch["positive"]) + batch["negative-vector"] = get_embeddings(batch["negative"]) def get_similarities(a, b): return [cosine_similarity([pos_vec], [neg_vec])[0][0] for pos_vec, neg_vec in zip(a, b)] batch["similarity-positive-negative"] = get_similarities(batch["positive-vector"], batch["negative-vector"]) - # Other similarity calculations... + batch["similarity-anchor-positive"] = get_similarities(batch["anchor-vector"], batch["positive-vector"]) + batch["similarity-anchor-negative"] = get_similarities(batch["anchor-vector"], batch["negative-vector"]) return batch ``` +- This function computes embeddings and similarity metrics for the generated queries. -After annotation, the next step is to fine-tune the embeddings, which can proceed even without prior annotation, assuming the generated dataset is of sufficient quality. +#### Next Steps +After data inspection and potential cleaning in Argilla, the next phase involves fine-tuning the embeddings. The code can be executed without prior annotation, assuming the generated data quality is adequate. ================================================== @@ -1316,9 +1397,9 @@ After annotation, the next step is to fine-tune the embeddings, which can procee ### Summary of Documentation on Evaluating Finetuned Embeddings -This documentation outlines the process of evaluating finetuned embeddings and comparing them to original base embeddings using the MatryoshkaLoss function. The evaluation is straightforward, as demonstrated in the provided code snippet. +This documentation outlines the process of evaluating finetuned embeddings and comparing them to base embeddings using the MatryoshkaLoss function. The evaluation steps are straightforward, as illustrated in the provided code. -#### Key Code Snippet for Base Model Evaluation +#### Key Code Snippet for Base Model Evaluation: ```python from zenml import log_model_metadata, step @@ -1337,77 +1418,72 @@ def evaluate_base_model(dataset: DatasetDict) -> Annotated[Dict[str, float], "ba return results ``` -#### Evaluation and Logging -- The results are logged as model metadata in ZenML, allowing inspection in the Model Control Plane. -- The evaluation results are stored as a dictionary with string keys and float values, which are versioned and tracked. +#### Evaluation Results: +- Results are logged as model metadata in ZenML, allowing inspection via the Model Control Plane. +- The evaluation output is a dictionary of string keys and float values, versioned and tracked in the artifact store. -#### Visualizing Results -- Results can be visualized using `PIL.Image` and `matplotlib`, comparing base and finetuned model evaluations side by side. -- The finetuned embeddings show improved recall, but further data refinement may be needed for production use. +#### Visualization: +- Results can be visualized using `PIL.Image` and `matplotlib` to compare base and finetuned model evaluations, represented as percentage values. 
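A hedged sketch of what that side-by-side comparison could look like with plain `matplotlib`; the dimensions and recall values below are placeholders, not results from the source:

```python
import matplotlib.pyplot as plt

# Placeholder numbers; real values come from the evaluation step's output.
dims = ["64", "128", "256", "512"]
base_recall = [0.55, 0.61, 0.66, 0.68]
finetuned_recall = [0.63, 0.70, 0.74, 0.77]

x = range(len(dims))
width = 0.35

plt.bar([i - width / 2 for i in x], base_recall, width, label="base")
plt.bar([i + width / 2 for i in x], finetuned_recall, width, label="finetuned")
plt.xticks(list(x), dims)
plt.xlabel("Embedding dimension")
plt.ylabel("Recall@10")
plt.legend()
plt.savefig("embedding_eval_comparison.png")
```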
-#### Model Control Plane -- The Model Control Plane provides a unified interface for inspecting results, artifacts, models, and metadata. -- Users can view the latest versions of artifacts, compare evaluation values, and inspect training parameters. +#### Insights: +- Finetuned embeddings improved recall across all dimensions, but further data refinement is needed for better performance. +- The finetuning used synthetic data from `distilabel` and `gpt-4o`, which may limit immediate improvements. -#### Next Steps -- After evaluation, finetuned embeddings can be integrated into the original RAG pipeline for further improvements. -- The next section will focus on LLM finetuning and deployment, with resources available for practical implementation. +#### Model Control Plane: +- The Model Control Plane provides a unified interface to inspect artifacts, models, logged metadata, and associated pipeline runs. +- It allows users to compare evaluation values and inspect training parameters. + +#### Next Steps: +- After evaluating embeddings, they can be integrated into the original RAG pipeline to regenerate embeddings and rerun evaluations. +- Future sections will cover LLM finetuning and deployment, with resources for starting LLM finetuning with ZenML. -For additional details, users can refer to the linked documentation and resources provided throughout the text. +For further exploration, refer to the provided links for detailed guides and project repositories. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings.md === -**Summary: Finetuning Embeddings on Synthetic Data** +**Summary: Finetuning Embeddings on Custom Synthetic Data** -This documentation outlines the process of finetuning embeddings using custom synthetic data to enhance retrieval performance in a RAG (Retrieval-Augmented Generation) pipeline. The pipeline retrieves relevant documents from a vector database and generates responses using a language model. While off-the-shelf embeddings serve as a baseline, finetuning on domain-specific data can significantly improve performance. +This documentation outlines the process of finetuning embeddings on synthetic data to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. Initially, off-the-shelf embeddings are used, which serve as a baseline. To improve performance, embeddings should be finetuned on domain-specific data, particularly technical documentation. **Key Steps:** -1. **Generate Synthetic Data**: Utilize `distilabel` for generating synthetic data. +1. **Generate Synthetic Data**: Utilize `distilabel` for synthetic data generation. 2. **Finetune Embeddings**: Use Sentence Transformers for embedding finetuning. 3. **Evaluate Embeddings**: Assess the finetuned embeddings and leverage ZenML's model control plane for systematic evaluation. **Libraries Used:** -- **distilabel**: Generates synthetic data and provides AI feedback using LLMs as judges. -- **argilla**: Facilitates collaboration between AI engineers and domain experts through an interactive UI for data organization and exploration. +- **Distilabel**: Generates synthetic data and provides AI feedback using LLMs. +- **Argilla**: Facilitates collaboration between AI engineers and domain experts through an interactive UI for data organization and exploration. -Both libraries can be used independently but are more effective when combined, and their integration will be demonstrated through ZenML pipelines. 
- -For practical implementation, refer to the [llm-complete-guide repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) for complete code examples. The finetuning process can be executed locally or on cloud compute resources. +Both libraries can be used independently but are more effective when combined. The entire process can be implemented via ZenML pipelines, and detailed instructions are available in the [llm-complete-guide repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). The finetuning process can be executed locally or on cloud compute. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-llms.md === -### Summary of LLM Finetuning Documentation +### Summary: Finetuning LLMs -**Purpose**: This documentation focuses on finetuning Large Language Models (LLMs) for specific tasks to enhance performance and cost-effectiveness. +This documentation outlines the process of finetuning Large Language Models (LLMs) for specific tasks or to enhance performance and cost-effectiveness. Key points include: -**Context**: Previous sections covered using RAG with ZenML, evaluating RAG systems, improving retrieval through reranking, and finetuning embeddings. This section emphasizes the importance of finetuning LLMs on custom data. +- **Purpose of Finetuning**: While APIs like OpenAI and Anthropic are commonly used, finetuning an LLM on custom data can improve response generation, understanding of domain-specific terminology, prompt length, adherence to specific patterns, and latency optimization. -**When to Finetune**: -- Improve response generation in specific formats. -- Enhance understanding of domain-specific terminology. -- Reduce prompt length for consistent outputs. -- Follow specific patterns or protocols efficiently. -- Optimize for latency by minimizing context window requirements. +- **Benefits of Finetuning**: It can enhance the model's ability to generate structured responses, better comprehend specialized content, and reduce the context window needed for effective performance. -**Guide Structure**: -1. [Finetuning in 100 lines of code](finetuning-100-loc.md) -2. [Why and when to finetune LLMs](why-and-when-to-finetune-llms.md) -3. [Starter choices with finetuning](starter-choices-for-finetuning-llms.md) -4. [Finetuning with 🤗 Accelerate](finetuning-with-accelerate.md) -5. [Evaluation for finetuning](evaluation-for-finetuning.md) -6. [Deploying finetuned models](deploying-finetuned-models.md) -7. [Next steps](next-steps.md) +- **Guide Structure**: The guide covers the following topics: + - [Finetuning in 100 lines of code](finetuning-100-loc.md) + - [Why and when to finetune LLMs](why-and-when-to-finetune-llms.md) + - [Starter choices with finetuning](starter-choices-for-finetuning-llms.md) + - [Finetuning with 🤗 Accelerate](finetuning-with-accelerate.md) + - [Evaluation for finetuning](evaluation-for-finetuning.md) + - [Deploying finetuned models](deploying-finetuned-models.md) + - [Next steps](next-steps.md) -**Key Points**: -- The steps for finetuning an LLM are straightforward but require understanding the need for finetuning, performance evaluation, and data selection. -- The guide does not follow a specific use case but provides general principles applicable to various scenarios. 
-- For practical implementation, refer to the [llm-lora-finetuning repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning), which contains complete code that can be executed locally (with a GPU) or on cloud platforms. +- **Implementation Guidance**: The steps for finetuning are straightforward, but understanding the need for finetuning, evaluating performance, and selecting appropriate data are crucial. -This summary captures the essential aspects of the documentation while maintaining clarity and conciseness. +- **Example Repository**: For practical implementation, refer to the [llm-lora-finetuning repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning), which contains the complete code that can be executed locally (with a GPU) or on cloud platforms. + +This guide emphasizes the importance of strategic decisions in the finetuning process rather than focusing on a specific use case. ================================================== @@ -1415,17 +1491,16 @@ This summary captures the essential aspects of the documentation while maintaini ### Summary: Fine-tuning an LLM in 100 Lines of Code -This documentation provides a concise guide to implementing a fine-tuning pipeline for a language model (LLM) using TinyLlama (1.1B parameters). The example demonstrates loading the model, preparing a dataset, fine-tuning, and generating responses. +This documentation outlines a concise implementation of a fine-tuning pipeline for a language model (LLM) using the TinyLlama model (1.1B parameters). The example demonstrates loading the model, preparing a dataset, fine-tuning, and generating responses. #### Key Steps: -1. **Installation**: Required packages can be installed using: +1. **Installation**: Required packages can be installed via: ```bash pip install datasets transformers torch accelerate>=0.26.0 ``` -2. **Dataset Preparation**: - A small instruction-tuning dataset is created with input-output pairs: +2. **Dataset Preparation**: A small instruction-tuning dataset is created with clear input-output pairs. ```python def prepare_dataset() -> Dataset: data = [ @@ -1436,16 +1511,14 @@ This documentation provides a concise guide to implementing a fine-tuning pipeli return Dataset.from_list(data) ``` -3. **Tokenization**: - Each example is formatted and tokenized: +3. **Tokenization**: The dataset is formatted and tokenized for model training. ```python def tokenize_data(example: Dict[str, str], tokenizer: AutoTokenizer) -> Dict[str, torch.Tensor]: formatted_text = f"### Instruction: {example['instruction']}\n### Response: {example['response']}" return tokenizer(formatted_text, truncation=True, padding="max_length", max_length=128) ``` -4. **Model Fine-tuning**: - The model is fine-tuned with specified training parameters: +4. **Model Fine-tuning**: The model is fine-tuned with specified training parameters. ```python def fine_tune_model(base_model: str = "TinyLlama/TinyLlama-1.1B-Chat-v1.0") -> Tuple[AutoModelForCausalLM, AutoTokenizer]: tokenizer = AutoTokenizer.from_pretrained(base_model) @@ -1469,8 +1542,7 @@ This documentation provides a concise guide to implementing a fine-tuning pipeli return model, tokenizer ``` -5. **Response Generation**: - The fine-tuned model generates responses based on prompts: +5. **Response Generation**: The fine-tuned model generates responses based on new prompts. 
```python def generate_response(prompt: str, model: AutoModelForCausalLM, tokenizer: AutoTokenizer, max_length: int = 128) -> str: inputs = tokenizer(f"### Instruction: {prompt}\n### Response:", return_tensors="pt").to(model.device) @@ -1478,21 +1550,23 @@ This documentation provides a concise guide to implementing a fine-tuning pipeli return tokenizer.decode(outputs[0], skip_special_tokens=True) ``` +6. **Testing the Model**: The model is tested with various prompts to demonstrate its response capabilities. + #### Limitations: -- **Dataset Size**: Real tasks require larger datasets. -- **Model Size**: Larger models yield better results but need more resources. -- **Training Time**: Minimal epochs and simple learning rates are used for demonstration. -- **Evaluation**: Proper metrics and validation data are necessary for production. +- The dataset is minimal; real tasks require larger datasets. +- Larger models may yield better results but need more resources. +- The training configuration is simplified for demonstration purposes. +- Evaluation metrics and validation data are necessary for production systems. #### Next Steps: -Future sections will cover: +The guide will cover more advanced topics, including: - Larger models and datasets - Evaluation metrics -- Parameter-efficient fine-tuning (PEFT) +- Parameter-efficient fine-tuning techniques - Experiment tracking and model management - Deployment of fine-tuned models -This guide serves as a foundational example for understanding LLM fine-tuning concepts. +This implementation serves as a foundational example for understanding LLM fine-tuning. ================================================== @@ -1500,115 +1574,108 @@ This guide serves as a foundational example for understanding LLM fine-tuning co ### Summary: When to Finetune LLMs -This guide provides an overview of when and why to finetune large language models (LLMs) on custom data. Key points include: +This guide provides a practical overview for finetuning large language models (LLMs) on custom data. Key points include: -- **Finetuning Limitations**: It is not a universal solution and may not achieve desired accuracy. It introduces technical debt. -- **Use Cases Beyond Chatbots**: LLMs can be applied in various contexts, often with lower failure rates than chatbot interfaces. -- **Finetuning as a Last Resort**: It should be the final step after exploring alternatives like smaller models, Retrieval-Augmented Generation (RAG), or decomposed tasks. +- **Not a Universal Solution**: Finetuning may not solve every problem and can introduce technical debt. It should be the last resort after exploring other options. +- **Diverse Use Cases**: LLMs can be applied beyond chatbot interfaces, often with lower failure rates in non-chatbot scenarios. -### Scenarios for Finetuning LLMs: -1. **Domain-Specific Knowledge**: Necessary for deep understanding in specialized fields (e.g., medical, legal). +### When to Consider Finetuning + +Finetuning is beneficial in the following scenarios: + +1. **Domain-Specific Knowledge**: Necessary for deep understanding in specialized fields (e.g., medical, legal). RAG is often better for novel domains. 2. **Consistent Style/Format**: Required for specific output formats, such as code generation. -3. **Task Accuracy**: Needed for critical applications demanding higher accuracy. -4. **Proprietary Information**: Essential when handling confidential data that cannot be sent to external APIs. -5. 
**Custom Instructions**: Useful for frequently used prompts, reducing latency and costs.
-6. **Efficiency**: Can improve performance with shorter prompts.
+3. **Improved Task Accuracy**: Needed for critical application tasks.
+4. **Handling Proprietary Information**: Essential for confidential data that cannot be sent to external APIs.
+5. **Custom Instructions/Prompts**: Repeated prompts can be integrated into the model, saving latency and costs.
+6. **Improved Efficiency**: Finetuning may enhance performance with shorter prompts.
+
+### Decision Flowchart

-### Decision Flowchart:
```mermaid
flowchart TD
    A[Should I finetune an LLM?] --> B{Is prompt engineering<br/>sufficient?}
    B -->|Yes| C[Use prompt engineering]
-    B -->|No| D{Is it primarily a<br/>knowledge retrieval<br/>problem?}
+    B -->|No| D{Is it a knowledge retrieval<br/>problem?}
    D -->|Yes| E{Is real-time data<br/>access needed?}
    E -->|Yes| F[Use RAG]
-    E -->|No| G{Is data volume<br/>very large?}
+    E -->|No| G{Is data volume<br/>large?}
    G -->|Yes| H[Consider hybrid:<br/>RAG + Finetuning]
    G -->|No| F
    D -->|No| I{Is it a narrow,<br/>specific task?}
-    I -->|Yes| J{Can a smaller<br/>specialized model<br/>handle it?}
+    I -->|Yes| J{Can a smaller<br/>model handle it?}
    J -->|Yes| K[Use smaller model]
    J -->|No| L[Consider finetuning]
-    I -->|No| M{Do you need<br/>consistent style<br/>or format?}
+    I -->|No| M{Do you need<br/>consistent style<br/>or format?}
    M -->|Yes| L
    M -->|No| N{Is deep domain<br/>expertise required?}
-    N -->|Yes| O{Is the domain<br/>well-represented in<br/>base model?}
+    N -->|Yes| O{Is the domain<br/>well-represented?}
    O -->|Yes| P[Use base model]
    O -->|No| L
-    N -->|No| Q{Is data<br/>proprietary/sensitive?}
+    N -->|No| Q{Is data<br/>proprietary?}
    Q -->|Yes| R{Can you use<br/>API solutions?}
    R -->|Yes| S[Use API solutions]
    R -->|No| L
    Q -->|No| S
```

-### Alternatives to Finetuning:
-- **Prompt Engineering**: Effective for many use cases without finetuning.
-- **RAG**: Often more effective for specific knowledge bases.
-- **Smaller Models**: Better for narrow tasks.
-- **API Solutions**: Simpler and cost-effective if sensitive data isn't involved.
+### Alternatives to Finetuning
+
+Before finetuning, consider:

-Finetuning can be beneficial but should only be pursued after considering simpler solutions and confirming a clear need for its advantages. The next section will cover practical considerations for finetuning LLMs.
+- **Prompt Engineering**: Often effective without finetuning.
+- **Retrieval-Augmented Generation (RAG)**: More effective for specific knowledge bases.
+- **Smaller Task-Specific Models**: May outperform finetuned LLMs for narrow tasks.
+- **API-Based Solutions**: Simpler and cost-effective for non-sensitive data.
+
+Finetuning can be powerful but should be approached cautiously, starting with simpler solutions before considering it as a necessary step.

==================================================

=== File: docs/book/user-guide/llmops-guide/finetuning-llms/starter-choices-for-finetuning-llms.md ===

-# Summary of Finetuning LLMs Documentation
+### Summary of Finetuning LLMs Documentation

-## Overview
-This guide provides essential steps for finetuning large language models (LLMs) tailored to specific tasks and datasets. Key steps include selecting a use case, gathering data, choosing a base model, and evaluating success.
+This guide provides a structured approach to finetuning large language models (LLMs) tailored to specific tasks. Key steps include selecting a use case, gathering data, choosing a base model, and evaluating success.

-## Quick Assessment Questions
+#### Quick Assessment Questions
Before starting, consider:
-1. **Success Definition**: Can you quantify success? (e.g., "95% accuracy in extracting order IDs")
-2. **Data Readiness**: Is your data prepared? (e.g., "1000 labeled support tickets")
-3. **Task Consistency**: Is the task well-defined? (e.g., "Convert email to 5 specific fields")
-4. **Verification**: Can a human verify correctness? (e.g., "Check if extracted date matches document")
+1. **Define Success**: Use measurable metrics (e.g., "95% accuracy in extracting order IDs").
+2. **Data Readiness**: Ensure data is prepared (e.g., "1000 labeled support tickets").
+3. **Task Consistency**: Focus on consistent tasks (e.g., "Convert email to 5 specific fields").
+4. **Human Verification**: Ensure correctness can be verified (e.g., "Check if extracted date matches document").

-## Picking a Use Case
+#### Picking a Use Case
Choose a small, manageable use case that cannot be easily solved by non-LLM methods. Examples include:
- **Good Use Cases**: Structured data extraction, domain-specific classification, standardized response generation.
- **Challenging Use Cases**: Open-ended chat, creative writing, general knowledge QA.

-## Data Selection
+#### Picking Data
Select data that closely aligns with your use case. Aim for hundreds to thousands of examples. Examples of reusable data include:
-- Customer support email responses
-- Manually extracted metadata
-
-### Good vs. 
Not-So-Good Use Cases -| Good Use Cases | Why It Works | Example | Data Requirements | -|----------------|--------------|---------|-------------------| -| Structured Data Extraction | Clear inputs/outputs | Extracting order details | 500-1000 annotated emails | -| Domain-Specific Classification | Well-defined categories | Categorizing support tickets | 1000+ labeled examples | - -| Challenging Use Cases | Why It's Tricky | Alternative Approach | -|-----------------------|------------------|---------------------| -| Open-ended Chat | Hard to measure success | Use instruction tuning | -| Creative Writing | Subjective quality | Focus on specific formats | - -## Success Indicators -Evaluate your use case through indicators: -| Indicator | Good Sign | Warning Sign | -|-----------|-----------|--------------| -| Task Scope | "Extract purchase date" | "Handle all inquiries" | -| Output Format | Structured JSON | Free-form text | -| Data Availability | 500+ examples | Need to create examples | -| Evaluation Method | Field-by-field metrics | User feedback | - -## Picking a Base Model +- Customer support email responses. +- Manually extracted metadata. + +#### Success Indicators +Evaluate your use case using indicators: +- **Task Scope**: Specific tasks (e.g., "Extract purchase date"). +- **Output Format**: Structured outputs vs. free-form text. +- **Data Availability**: Sufficient examples ready for use. +- **Evaluation Method**: Clear metrics vs. subjective feedback. +- **Business Impact**: Tangible benefits vs. vague goals. + +#### Picking a Base Model Select a base model based on your use case: -- **Llama 3.1-8B**: Structured data extraction, requires 16GB GPU RAM. -- **Llama 3.1-70B**: Complex reasoning, requires 80GB GPU RAM. -- **Mistral 7B**: General text generation, requires 16GB GPU RAM. -- **Phi-2**: Lightweight tasks, requires 8GB GPU RAM. +- **Llama 3.1 8B**: Best for structured data extraction, requires 16GB GPU RAM. +- **Llama 3.1 70B**: Suitable for complex reasoning, requires 80GB GPU RAM. +- **Mistral 7B**: Good for general text generation, requires 16GB GPU RAM. +- **Phi-2**: Ideal for lightweight tasks, requires 8GB GPU RAM. -### Model Selection Matrix +#### Model Selection Matrix ```mermaid graph TD A[Choose Your Task] --> B{Structured Output?} @@ -1620,35 +1687,34 @@ graph TD F -->|No| H[Mistral-7B] ``` -## Evaluating Success +#### Evaluating Success Define success metrics early. For structured data extraction, consider: -- Accuracy of extracted fields -- Precision and recall -- Processing time -- Error rates +- Accuracy of extracted fields. +- Precision and recall for specific types. +- Processing time and error rates. -## Next Steps -With a clear understanding of scoping, data selection, and evaluation, proceed to the technical implementation in the next section, which covers practical finetuning using the Accelerate library. +#### Next Steps +With a clear understanding of scoping, data selection, and evaluation, proceed to the technical implementation, starting with practical examples using the Accelerate library. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/evaluation-for-finetuning.md === -# Summary of LLM Finetuning Evaluations +# Evaluation for LLM Finetuning ## Overview -Evaluations (evals) for Large Language Model (LLM) finetuning are essential for assessing model performance, reliability, and safety, similar to unit tests in software development. 
They help ensure that models behave as expected, catch issues early, and track progress over time. An incremental approach to building evaluation sets is recommended to facilitate early implementation and continuous improvement. +Evaluations (evals) for Large Language Model (LLM) finetuning are essential for assessing model performance, reliability, and safety, similar to unit tests in software development. They help ensure the model behaves as expected, catch issues early, and track progress over time. An incremental approach to building evaluations is recommended to avoid paralysis by analysis. ## Motivation and Benefits Key motivations for implementing evals include: 1. **Prevent Regressions**: Ensure new changes do not negatively impact existing functionality. -2. **Track Improvements**: Quantify and visualize model enhancements over iterations. +2. **Track Improvements**: Quantify model improvements with each iteration. 3. **Ensure Safety and Robustness**: Identify and mitigate risks, biases, or unexpected behaviors. A robust evaluation strategy leads to more reliable and performant finetuned LLMs. ## Types of Evaluations -While generic evaluation frameworks are common, custom evaluations tailored to specific use cases are also important. Evaluations can be categorized into: +While generic evaluation frameworks are common, custom evaluations tailored to specific use cases are also important. They can be categorized into: 1. **Success Modes**: Focus on desired outputs, such as: - Correct formatting @@ -1660,9 +1726,7 @@ While generic evaluation frameworks are common, custom evaluations tailored to s - Incorrect formats - Biased or incoherent responses -### Custom Evaluation Example -A simple implementation for testing success and failure modes could look like this: - +### Example Code for Custom Evals ```python from my_library import query_llm @@ -1685,12 +1749,12 @@ for question, answers in bad_responses.items(): ``` ## Generalized Evals and Frameworks -Generalized evals provide structured evaluation approaches, including: -- Organization and structuring of evals -- Standardized metrics for common tasks -- Insights into overall model performance +Generalized evals provide structured approaches to evaluation, including: +- Organization of evals +- Standardized metrics +- Insights into overall performance -Examples of frameworks include: +Complement generalized evals with custom ones for specific needs. Recommended frameworks include: - [prodigy-evaluate](https://github.com/explosion/prodigy-evaluate) - [ragas](https://docs.ragas.io/en/stable/getstarted/monitoring.html) - [giskard](https://docs.giskard.ai/en/stable/getting_started/quickstart/quickstart_llm.html) @@ -1698,14 +1762,14 @@ Examples of frameworks include: - [nervaluate](https://github.com/MantisAI/nervaluate) ## Data and Tracking -Regular analysis of inference data is crucial for identifying patterns and areas for improvement. Implementing comprehensive logging and using frameworks for structured data collection can streamline this process. Recommended options include: +Regularly analyze inference data to identify patterns and areas for improvement. Implement comprehensive logging early on to track model behavior and progress. 
Consider using frameworks for structured data collection and analysis, such as: - [weave](https://github.com/wandb/weave) - [openllmetry](https://github.com/traceloop/openllmetry) - [langsmith](https://smith.langchain.com/) - [langfuse](https://langfuse.com/) - [braintrust](https://www.braintrust.dev/) -Creating simple dashboards to visualize core metrics reflecting model performance is also beneficial for monitoring progress and guiding future iterations. +Creating simple dashboards to visualize core metrics can effectively monitor progress and assess the impact of changes. Focus on key metrics aligned with iteration goals, and prioritize simplicity over perfection. ================================================== @@ -1717,24 +1781,24 @@ After iterating on your finetuned model, consider the following key areas: - Identify factors that improve or worsen model performance. - Determine the minimum viable model size. -- Assess the feasibility of iteration within your company’s hardware constraints. -- Ensure the finetuned model addresses the intended business use case. +- Assess the feasibility of iteration within your company's hardware constraints. +- Ensure the model effectively addresses the business use case. Next stages may involve: - Scaling for more users or real-time scenarios. -- Meeting critical accuracy requirements, potentially necessitating larger models. -- Integrating LLM finetuning into your business systems, focusing on monitoring, logging, and evaluation. +- Meeting critical accuracy requirements, potentially necessitating a larger model. +- Integrating LLM finetuning into your business systems, including monitoring and evaluation. -While it may be tempting to switch to larger models, prioritize enhancing your dataset, especially if starting with limited examples. Improving data quality is crucial before upgrading to more powerful models. +While it may be tempting to switch to larger models, enhancing your dataset is often more impactful, especially if starting with limited examples. Focus on improving data quality before upgrading to more powerful models. ## Resources -Recommended resources for LLM finetuning: +Recommended resources for further learning on LLM finetuning: - [Mastering LLMs Course](https://parlance-labs.com/education/) - Video course by Hamel Husain and Dan Becker. -- [Phil Schmid's blog](https://www.philschmid.de/) - Examples of LLM finetuning with the latest techniques. -- [Sam Witteveen's YouTube channel](https://www.youtube.com/@samwitteveenai) - Videos on finetuning, prompt engineering, and base model explorations. +- [Phil Schmid's Blog](https://www.philschmid.de/) - Examples of LLM finetuning techniques. +- [Sam Witteveen's YouTube Channel](https://www.youtube.com/@samwitteveenai) - Videos on finetuning, prompt engineering, and base model explorations. ================================================== @@ -1742,91 +1806,88 @@ Recommended resources for LLM finetuning: # Deployment Options for Finetuned LLMs -Deploying a finetuned LLM is essential for real-world applications, requiring careful planning for performance, reliability, and cost-effectiveness. +Deploying your finetuned LLM is essential for real-world applications. Key considerations include: ## Deployment Considerations -Key factors influencing deployment include: -- **Hardware Requirements**: LLMs demand significant RAM and processing power. Choose hardware that balances performance and cost. 
-- **Real-Time Needs**: Consider failover scenarios, conduct benchmarks, and model expected user loads. Decide between streaming and non-streaming approaches. -- **Optimization Techniques**: Techniques like quantization can reduce resource usage but may impact performance. Rigorous evaluation is necessary to maintain accuracy. +- **Resource Requirements**: LLMs need substantial RAM, processing power, and specialized hardware. Balance performance and cost based on use case. +- **Real-Time Needs**: Plan for immediate responses, failover scenarios, and conduct load testing. +- **Streaming vs. Non-Streaming**: Choose based on latency and resource use. +- **Optimization Techniques**: Use methods like quantization to reduce resource usage, but evaluate their impact on performance. ## Deployment Options and Trade-offs -1. **Roll Your Own**: Set up and manage your infrastructure for maximum control but requires expertise. -2. **Serverless Options**: Scalable and cost-efficient, but may face latency issues due to "cold starts." -3. **Always-On Options**: Minimizes latency but incurs costs even during idle times. -4. **Fully Managed Solutions**: Simplifies deployment but may offer less flexibility and higher costs. +1. **Roll Your Own**: Set up and manage your own infrastructure (e.g., Docker, FastAPI). Offers control but requires expertise. +2. **Serverless Options**: Scalable and cost-efficient, but may face latency issues due to cold starts. +3. **Always-On Options**: Minimizes latency but incurs costs during idle periods. +4. **Fully Managed Solutions**: Simplifies deployment but may limit flexibility and increase costs. -Consider your team's expertise, budget, expected load, and specific requirements when choosing a deployment option. +Consider team expertise, budget, load patterns, and specific requirements when choosing a deployment option. ## Deployment with vLLM and ZenML -[vLLM](https://github.com/vllm-project/vllm) is a library for high-throughput, low-latency LLM deployment. ZenML integrates with vLLM, allowing easy deployment of finetuned models. +[vLLM](https://github.com/vllm-project/vllm) is a library for high-throughput, low-latency LLM deployment. ZenML integrates with vLLM for easy deployment. ```python from zenml import pipeline +from typing import Annotated from steps.vllm_deployer import vllm_model_deployer_step from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentService @pipeline() -def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> VLLMDeploymentService: +def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> Annotated[VLLMDeploymentService, "my_finetuned_llm"]: service = vllm_model_deployer_step(model=model, timeout=timeout) return service ``` -The `model` argument can be a local path or a Hugging Face Hub ID, deploying the model locally for batch inference using an OpenAI-compatible API. +The `model` argument can be a local path or a Hugging Face Hub ID. ## Cloud-Specific Deployment Options -- **AWS**: - - **Amazon SageMaker**: Fully managed ML platform for real-time inference and automatic scaling. - - **Serverless**: AWS Lambda with API Gateway for real-time responses (watch for cold starts). - - **ECS/EKS with Fargate**: Container orchestration with more control but added complexity. - -- **GCP**: - - **Google Cloud AI Platform**: Managed ML services similar to SageMaker. - - **Cloud Run**: Serverless option for containerized LLMs. - - **GKE**: For more control over compute resources. 
+- **AWS**: Use Amazon SageMaker for managed ML services, or AWS Lambda with API Gateway for serverless deployment. Amazon ECS or EKS with Fargate offers more control. +- **GCP**: Google Cloud AI Platform provides managed services similar to SageMaker. Cloud Run offers serverless options, while GKE allows for containerized model deployment. -## Architectures for Real-Time Engagement -Deploy models behind a load balancer with auto-scaling for responsiveness. Implement caching (e.g., Redis) for frequent responses and use asynchronous architectures with message queues (e.g., Amazon SQS) for complex queries. For global deployments, consider edge computing services like AWS Lambda@Edge to reduce latency. +## Architectures for Real-Time Customer Engagement +Deploy models behind a load balancer with auto-scaling for responsiveness. Implement caching (e.g., Redis) for frequent responses and use message queues (e.g., Amazon SQS) for complex queries. Consider edge computing for global deployments to reduce latency. ## Reducing Latency and Increasing Throughput -Optimize for low latency and high throughput using: -- **Model Optimization**: Techniques like quantization and distillation. -- **Hardware Acceleration**: Use GPU instances for faster inference. -- **Request Batching**: Process multiple inputs simultaneously. -- **Monitoring and Profiling**: Identify bottlenecks and continuously refine performance. +- **Model Optimization**: Use quantization and distillation to enhance performance. +- **Hardware Acceleration**: Leverage GPU instances for faster inference. +- **Request Batching**: Process multiple inputs simultaneously to increase throughput. +- **Monitoring**: Continuously measure and optimize the deployment. ## Monitoring and Maintenance -Post-deployment, focus on: -1. **Evaluation Failures**: Regular performance checks. +Key areas to monitor include: +1. **Evaluation Failures**: Regularly assess model performance. 2. **Latency Metrics**: Ensure response times meet requirements. -3. **Load Patterns**: Monitor user interactions for scaling insights. -4. **Data Analysis**: Identify trends and biases in model outputs. +3. **Load Patterns**: Analyze user interactions for scaling decisions. +4. **Data Analysis**: Identify trends and biases in model inputs/outputs. -Ensure logging practices comply with data protection regulations. By considering these deployment options and maintaining vigilant monitoring, you can ensure optimal performance of your finetuned LLM. +Ensure compliance with data protection regulations in logging practices. By implementing these strategies, you can maintain optimal performance of your finetuned LLM. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelerate.md === -### Summary: Finetuning an LLM with Accelerate and PEFT +# Finetuning an LLM with Accelerate and PEFT + +This documentation outlines the process of finetuning language models using the Viggo dataset, which consists of over 5,000 pairs of structured meaning representations and natural language descriptions for video game dialogues. The goal is to train models to generate fluent responses from these structured inputs. -This documentation outlines the process of finetuning a language model (LLM) using the Viggo dataset, which consists of over 5,000 pairs of structured meaning representations and their corresponding natural language descriptions for video game dialogues. 
The goal is to train models to generate fluent responses from these structured inputs. +## Finetuning Pipeline + +The finetuning pipeline includes the following steps: -#### Finetuning Pipeline Steps 1. **prepare_data**: Load and preprocess the Viggo dataset. 2. **finetune**: Finetune the model on the dataset. 3. **evaluate_base**: Evaluate the base model before finetuning. 4. **evaluate_finetuned**: Evaluate the finetuned model. 5. **promote**: Promote the best-performing model to "staging" in the Model Control Plane. -For initial experiments, it is recommended to start with smaller models (e.g., Llama 3.1 ~8B parameters) to facilitate rapid iteration. +For initial experiments, it is recommended to start with smaller models (e.g., Llama 3.1 family at ~8B parameters) to facilitate quick iterations. + +## Implementation Details -#### Implementation Details -The `prepare_data` step is minimal, focusing on loading and tokenizing data. Care should be taken with input data formats, especially for instruction-tuned models. Logging inputs and outputs during finetuning is advised. +The `prepare_data` step is minimal, focusing on loading and tokenizing the dataset. Care should be taken with input data formatting, especially for instruction-tuned models. Logging inputs and outputs during finetuning is advised. -The finetuning process utilizes the `accelerate` library for multi-GPU support. The following code snippet illustrates the setup: +The finetuning process utilizes the `accelerate` library for multi-GPU support. The core finetuning code is as follows: ```python -model = load_base_model(base_model_id, use_accelerate=True) +model = load_base_model(base_model_id, use_accelerate=use_accelerate) trainer = transformers.Trainer( model=model, @@ -1834,7 +1895,6 @@ trainer = transformers.Trainer( eval_dataset=tokenized_val_dataset, args=transformers.TrainingArguments( output_dir=output_dir, - warmup_steps=warmup_steps, per_device_train_batch_size=per_device_train_batch_size, max_steps=max_steps, learning_rate=lr, @@ -1842,7 +1902,6 @@ trainer = transformers.Trainer( save_strategy="steps", evaluation_strategy="steps", do_eval=True, - label_names=["input_ids"], ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), callbacks=[ZenMLCallback(accelerator=accelerator)], @@ -1851,33 +1910,39 @@ trainer = transformers.Trainer( Key points: - `ZenMLCallback` logs metrics to ZenML. -- `gradient_checkpointing_kwargs` enables gradient checkpointing with Accelerate. +- `gradient_checkpointing_kwargs` enables gradient checkpointing when using Accelerate. - Evaluation metrics are computed using the `evaluate` library, focusing on ROUGE scores (ROUGE-N, ROUGE-L, ROUGE-W, ROUGE-S). -#### Using ZenML Accelerate Decorator -The `@run_with_accelerate` decorator simplifies distributed training setup: +## Using the ZenML Accelerate Decorator + +ZenML offers a `@run_with_accelerate` decorator for simplified distributed training setup: ```python @run_with_accelerate(num_processes=4, multi_gpu=True, mixed_precision='bf16') @step -def finetune_step(...): +def finetune_step(tokenized_train_dataset, tokenized_val_dataset, base_model_id, output_dir): model = load_base_model(base_model_id, use_accelerate=True) - trainer = transformers.Trainer(...) + + trainer = transformers.Trainer( + # ... trainer setup as shown above + ) + trainer.train() return trainer.model ``` -This approach offers cleaner configuration management and requires a properly configured Docker environment with CUDA support. 
+This approach separates distributed training configuration from model logic and requires a properly configured Docker environment with CUDA support. -#### Dataset Iteration -Careful attention to input data is crucial. Poorly formatted data can lead to suboptimal model performance. Regular inspection of data at all stages is recommended. Consider supplementing or synthetically generating data if necessary. +## Dataset Iteration -As evaluations are established, focus can shift to optimizing parameters and assessing model performance. Key considerations include: -- Better evaluation methods -- Model serving and inference strategies -- Integration within existing production architectures +Careful attention to input data is crucial. Poor performance post-finetuning may indicate issues with data formatting or tokenizer mismatches. Regular inspection of data at all stages is recommended. Consider supplementing or synthetically generating data if necessary. -Aiming for smaller, efficient models tailored to specific use cases can lead to better outcomes. +As evaluations are established, focus on optimal parameters and their effects. Future considerations include: +- Improved evaluations +- Model serving and inference +- Integration within existing production architecture + +A goal may be to minimize model size while maintaining acceptable performance for specific use cases. Evaluations play a key role in achieving this balance. ================================================== @@ -1885,14 +1950,14 @@ Aiming for smaller, efficient models tailored to specific use cases can lead to ### Summary of Data Ingestion and Preprocessing for RAG Pipelines with ZenML -**Overview**: This documentation outlines the steps for ingesting and preprocessing data for Retrieval-Augmented Generation (RAG) pipelines using ZenML. The focus is on setting up a simple yet effective data ingestion process. +**Overview**: This documentation outlines the steps to ingest and preprocess data for Retrieval-Augmented Generation (RAG) pipelines using ZenML. The process involves scraping, loading, and preprocessing documents to train retriever and generator models. #### Data Ingestion -1. **Purpose**: Ingest data (documents and metadata) to train retriever and generator models. -2. **Tools**: ZenML integrates with various tools for managing data ingestion, including downloading and indexing documents. -3. **URL Scraping**: A ZenML step is implemented to scrape URLs from ZenML documentation. - **Code Example**: +1. **Initial Setup**: The first step is to gather a corpus of documents and relevant metadata. ZenML can integrate with various tools for data ingestion, preprocessing, and indexing. + +2. **URL Scraping**: A ZenML step is created to scrape URLs from ZenML documentation. The `url_scraper` function utilizes a helper utility to retrieve a unique set of URLs. + ```python from typing import List from typing_extensions import Annotated @@ -1900,19 +1965,14 @@ Aiming for smaller, efficient models tailored to specific use cases can lead to from steps.url_scraping_utils import get_all_pages @step - def url_scraper( - docs_url: str = "https://docs.zenml.io", - repo_url: str = "https://github.com/zenml-io/zenml", - website_url: str = "https://zenml.io", - ) -> Annotated[List[str], "urls"]: - docs_urls = get_all_pages(docs_url) + def url_scraper() -> Annotated[List[str], "urls"]: + docs_urls = get_all_pages("https://docs.zenml.io") log_artifact_metadata({"count": len(docs_urls)}) return docs_urls ``` -4. 
**Loading Documents**: The `unstructured` library is used to load and parse HTML content from the scraped URLs.
+3. **Loading Documents**: The `web_url_loader` step loads and parses HTML pages using the `unstructured` library, simplifying text extraction.
-   **Code Example**:
 ```python
 from typing import List
 from unstructured.partition.html import partition_html
@@ -1920,33 +1980,34 @@ Aiming for smaller, efficient models tailored to specific use cases can lead to
 from zenml import step

 @step
 def web_url_loader(urls: List[str]) -> List[str]:
     # Join the parsed HTML elements of each page into a single text blob
     return ["\n\n".join([str(el) for el in partition_html(url=url)]) for url in urls]
 ```

#### Data Preprocessing
-1. **Chunking**: After loading documents, they are split into smaller chunks for efficient processing by LLMs. The chosen chunk size is 500 characters with an overlap of 50 characters.
-   **Code Example**:
+1. **Chunking Documents**: After loading, documents are preprocessed into manageable chunks. The `preprocess_documents` step splits long strings into smaller segments, balancing chunk size and overlap.
+
 ```python
 import logging
 from typing import Annotated, List
 from utils.llm_utils import split_documents
 from zenml import ArtifactConfig, log_artifact_metadata, step

-logging.basicConfig(level=logging.INFO)
-
 @step(enable_cache=False)
 def preprocess_documents(documents: List[str]) -> Annotated[List[str], ArtifactConfig(name="split_chunks")]:
     log_artifact_metadata({"chunk_size": 500, "chunk_overlap": 50})
     return split_documents(documents, chunk_size=500, chunk_overlap=50)
 ```

-2. **Considerations**: The chunk size should be tailored to the data's structure. Overlapping chunks help retain important information.
+2. **Chunk Size Considerations**: Choosing an appropriate chunk size is crucial. For documentation, a chunk size of 500 characters with a 50-character overlap is recommended to ensure relevant information is retained.

-#### Additional Resources
-- For complete code and further details, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and check the [steps](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide/steps/).
+#### Additional Notes
+
+- Understanding the structure of your data is key to determining the optimal chunk size.
+- More complex preprocessing, such as text cleaning or metadata extraction, can be added as needed.
+- For complete code examples and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide).

-This summary provides a concise overview of the data ingestion and preprocessing steps necessary for setting up RAG pipelines using ZenML, retaining all critical technical information and code examples.
+This summary captures the essential steps and code snippets for setting up a RAG pipeline with ZenML, focusing on data ingestion and preprocessing.

==================================================

=== File: docs/book/user-guide/llmops-guide/rag-with-zenml/basic-rag-inference-pipeline.md ===

### Summary of RAG Inference Documentation

-This documentation outlines the process of using Retrieval-Augmented Generation (RAG) components to generate responses to user prompts based on documents stored in an index.
+This documentation outlines how to use Retrieval-Augmented Generation (RAG) components to generate responses based on queries using an index store of documents.
-#### Key Components: +#### Running Inference -1. **Inference Query**: - - To execute an inference query, use the following command: - ```bash - python run.py --rag-query "your query here" --model=gpt4 - ``` +To execute a query, use the following command in Python: -2. **Inference Function**: - - The core function for processing input and retrieving relevant documents is defined as follows: - ```python - def process_input_with_retrieval(input: str, model: str = OPENAI_MODEL, n_items_retrieved: int = 5) -> str: - related_docs = get_topn_similar_docs(get_embeddings(input), get_db_conn(), n=n_items_retrieved) - system_message = """You are a friendly chatbot. You can answer questions about ZenML...""" - messages = [ - {"role": "system", "content": system_message}, - {"role": "user", "content": f"```{input}```"}, - {"role": "assistant", "content": "Relevant ZenML documentation:\n" + "\n".join(doc[0] for doc in related_docs)}, - ] - return get_completion_from_messages(messages, model=model) - ``` +```bash +python run.py --rag-query "your query here" --model=gpt4 +``` -3. **Document Retrieval**: - - The function `get_topn_similar_docs` retrieves the most similar documents based on the query embedding: - ```python - def get_topn_similar_docs(query_embedding: List[float], conn: psycopg2.extensions.connection, n: int = 5) -> List[Tuple]: - embedding_array = np.array(query_embedding) - cur = conn.cursor() - cur.execute(f"SELECT content FROM embeddings ORDER BY embedding <=> %s LIMIT {n}", (embedding_array,)) - return cur.fetchall() - ``` - - This utilizes the `pgvector` PostgreSQL plugin for efficient similarity ordering. +This command triggers a function call that utilizes the outputs and components of the RAG pipeline. -4. **Generating Responses**: - - The function `get_completion_from_messages` generates a response using the specified LLM: - ```python - def get_completion_from_messages(messages, model=OPENAI_MODEL, temperature=0.4, max_tokens=1000): - completion_response = litellm.completion(model=model, messages=messages, temperature=temperature, max_tokens=max_tokens) - return completion_response.choices[0].message.content - ``` - - `litellm` serves as a universal interface for various LLMs, allowing flexibility in model selection. +#### Inference Pipeline Code -#### Conclusion: -This documentation provides a foundational understanding of setting up a basic RAG inference pipeline, focusing on document retrieval and response generation. Future improvements may involve fine-tuning embeddings for better performance with diverse document sets. +The core function for processing input with retrieval is defined as follows: + +```python +def process_input_with_retrieval(input: str, model: str = OPENAI_MODEL, n_items_retrieved: int = 5) -> str: + delimiter = "```" + related_docs = get_topn_similar_docs(get_embeddings(input), get_db_conn(), n=n_items_retrieved) + + system_message = """You are a friendly chatbot. You can answer questions about ZenML, its features, and use cases. 
Respond in a concise, technically credible tone using only ZenML documentation.""" + + messages = [ + {"role": "system", "content": system_message}, + {"role": "user", "content": f"{delimiter}{input}{delimiter}"}, + {"role": "assistant", "content": "Relevant ZenML documentation:\n" + "\n".join(doc[0] for doc in related_docs)}, + ] + + return get_completion_from_messages(messages, model=model) +``` + +#### Document Retrieval + +The `get_topn_similar_docs` function retrieves the most similar documents based on the query embedding: + +```python +def get_topn_similar_docs(query_embedding: List[float], conn: psycopg2.extensions.connection, n: int = 5) -> List[Tuple]: + embedding_array = np.array(query_embedding) + register_vector(conn) + cur = conn.cursor() + cur.execute(f"SELECT content FROM embeddings ORDER BY embedding <=> %s LIMIT {n}", (embedding_array,)) + return cur.fetchall() +``` + +This function leverages the `pgvector` PostgreSQL extension for efficient similarity search. + +#### Generating Responses + +The `get_completion_from_messages` function generates a response from the LLM: + +```python +def get_completion_from_messages(messages, model=OPENAI_MODEL, temperature=0.4, max_tokens=1000): + completion_response = litellm.completion(model=model, messages=messages, temperature=temperature, max_tokens=max_tokens) + return completion_response.choices[0].message.content +``` + +The `litellm` library provides a unified interface for various LLMs, facilitating experimentation with different models without code rewrites. + +#### Conclusion + +This basic RAG inference pipeline retrieves relevant text chunks based on a query, laying the groundwork for more complex setups and potential improvements in retrieval performance through fine-tuning embeddings. -For complete code examples, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide), especially the [`llm_utils.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/utils/llm_utils.py) file. +For complete code examples, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and specifically the [`llm_utils.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/utils/llm_utils.py) file. ================================================== @@ -2009,22 +2086,20 @@ For complete code examples, refer to the [Complete Guide](https://github.com/zen ### Summary: Storing Embeddings in a Vector Database -This documentation outlines the process of storing embeddings in a vector database, specifically PostgreSQL, for efficient retrieval based on similarity to queries. Storing embeddings allows for quick access without needing to regenerate them each time. +This guide explains how to store embeddings in a vector database, specifically using PostgreSQL, to facilitate efficient retrieval based on similarity to queries. Storing embeddings allows for quick access without the need to regenerate them each time. #### Key Points: +- **Vector Database**: PostgreSQL is chosen for its scalability and efficiency in handling high-dimensional vectors. Other vector databases can also be used. +- **Setup Instructions**: For setting up PostgreSQL, refer to the [repository instructions](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). -- **Purpose**: Store embeddings to enable fast retrieval of relevant document chunks. 
-- **Database Choice**: PostgreSQL is used due to its scalability and efficiency for high-dimensional vectors. Other vector databases can also be utilized.
-- **Setup**: Instructions for setting up PostgreSQL using Supabase are available in the repository.
-
+#### Connection and Interaction:
+- Use the `psycopg2` package for database connections and raw SQL for interactions.
+
 #### Code Overview:
-
-The following Python code demonstrates how to index embeddings in PostgreSQL using the `psycopg2` package:
+The following Python code snippet demonstrates the process of indexing documents and their embeddings:

 ```python
 from zenml import step
 import math
 from typing import List

 @step
 def index_generator(documents: List[Document]) -> None:
@@ -2053,7 +2128,7 @@ def index_generator(documents: List[Document]) -> None:
         cur.execute("SELECT COUNT(*) FROM embeddings WHERE content = %s", (doc.page_content,))
         if cur.fetchone()[0] == 0:
             cur.execute("""
                 INSERT INTO embeddings (content, token_count, embedding, filename, parent_section, url)
                 VALUES (%s, %s, %s, %s, %s, %s)""",
                 (doc.page_content, doc.token_count, doc.embedding.tolist(), doc.filename, doc.parent_section, doc.url))
     conn.commit()
@@ -2061,9 +2136,7 @@ def index_generator(documents: List[Document]) -> None:
         # psycopg2's execute() returns None, so fetch the count in a second call
         cur.execute("SELECT COUNT(*) FROM embeddings;")
         num_records = cur.fetchone()[0]
         # ivfflat expects an integer number of lists
         num_lists = int(max(num_records / 1000, 10) if num_records <= 1000000 else math.sqrt(num_records))

-        cur.execute(f"""
-            CREATE INDEX IF NOT EXISTS embeddings_idx ON embeddings USING ivfflat (embedding vector_cosine_ops) WITH (lists = {num_lists});
-        """)
+        cur.execute(f"CREATE INDEX IF NOT EXISTS embeddings_idx ON embeddings USING ivfflat (embedding vector_cosine_ops) WITH (lists = {num_lists});")
         conn.commit()

     except Exception as e:
@@ -2077,15 +2150,17 @@ def index_generator(documents: List[Document]) -> None:

#### Functionality:
- Connects to the database and creates the `vector` extension.
- Creates an `embeddings` table if it doesn't exist.
-- Inserts new embeddings only if they are not already present.
-- Calculates index parameters for optimal performance.
-- Creates an index using the `ivfflat` method for cosine distance similarity search.
+- Inserts documents and embeddings only if they are not already present.
+- Calculates index parameters and creates an index using the `ivfflat` method for cosine similarity.

#### Considerations:
-- Update strategies for embeddings depend on data change frequency.
-- Running on a GPU can enhance performance for large datasets.
+- Decide when to update embeddings based on how frequently the underlying data changes.
+- For large datasets, consider running on a GPU-enabled machine for performance.

+#### Next Steps:
+After storing embeddings, the next step is retrieving the documents most relevant to a query, which underpins the question-answering system.

-This setup enables efficient retrieval of documents based on their embeddings, facilitating the development of responsive question-answering systems. For complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide).
+For the complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide).
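+
+As a quick sanity check that the new index serves queries, a similarity lookup can be issued directly against the table. This is a minimal sketch mirroring the retrieval code used later in this guide; it assumes the `embeddings` table created above, the `pgvector` Python helper package, a live `psycopg2` connection, and a query embedding produced by the same model used for indexing (the `check_similar` helper name is hypothetical):
+
+```python
+import numpy as np
+from pgvector.psycopg2 import register_vector
+
+def check_similar(query_embedding, conn, n: int = 3):
+    register_vector(conn)  # lets numpy arrays bind as pgvector values
+    cur = conn.cursor()
+    # <=> is pgvector's cosine distance operator, backed by the ivfflat index above
+    cur.execute(
+        "SELECT content FROM embeddings ORDER BY embedding <=> %s LIMIT %s",
+        (np.array(query_embedding), n),
+    )
+    return cur.fetchall()
+```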
================================================== @@ -2095,49 +2170,26 @@ This setup enables efficient retrieval of documents based on their embeddings, f This documentation outlines a simple implementation of a Retrieval-Augmented Generation (RAG) pipeline in 85 lines of Python code. The pipeline performs the following tasks: -1. **Data Loading**: Uses a fictional dataset about 'ZenML World' as the corpus. -2. **Text Processing**: Splits text into chunks and tokenizes it (converts to words). -3. **Query Handling**: Takes a user query and retrieves the most relevant text chunks from the corpus. -4. **Answer Generation**: Utilizes OpenAI's GPT-3.5 model to generate answers based on the retrieved chunks. +1. **Data Loading**: Utilizes a fictional dataset about 'ZenML World' as the corpus. +2. **Text Processing**: Splits text into chunks and tokenizes it (converts text into words). +3. **Query Handling**: Accepts a user query and retrieves the most relevant text chunks from the corpus. +4. **Response Generation**: Uses OpenAI's GPT-3.5 model to generate answers based on the relevant chunks. ### Key Functions -- **`preprocess_text(text)`**: Normalizes text by converting to lowercase, removing punctuation, and trimming whitespace. - -- **`tokenize(text)`**: Tokenizes preprocessed text into words. - -- **`retrieve_relevant_chunks(query, corpus, top_n=2)`**: - - Tokenizes the query. - - Computes Jaccard similarity between the query and each chunk in the corpus. - - Returns the top N relevant chunks based on similarity. - -- **`answer_question(query, corpus, top_n=2)`**: - - Retrieves relevant chunks using `retrieve_relevant_chunks`. - - Constructs a context for the GPT-3.5 model and generates an answer. - -### Example Corpus - -The corpus consists of descriptions of various fictional entities in 'ZenML World', such as: -- **Plasma Phoenixes**: Creatures of pure energy. -- **Crystalline Crabs**: Creatures found on prismatic shores. - -### Example Queries and Outputs - -1. **Query**: "What are Plasma Phoenixes?" - - **Answer**: Describes Plasma Phoenixes as majestic creatures of pure energy. - -2. **Query**: "What kinds of creatures live on the prismatic shores of ZenML World?" - - **Answer**: Mentions Crystalline Crabs. +- **`preprocess_text(text)`**: + - Converts text to lowercase, removes punctuation, and trims whitespace. -3. **Query**: "What is the capital of Panglossia?" - - **Answer**: States that the capital is not mentioned in the context. +- **`tokenize(text)`**: + - Tokenizes preprocessed text into words. -### Technical Notes +- **`retrieve_relevant_chunks(query, corpus, top_n=2)`**: + - Calculates Jaccard similarity between the query and corpus chunks to find the top `n` relevant chunks. -- The similarity measure used is the Jaccard similarity coefficient, defined as the size of the intersection divided by the size of the union of two sets of tokens. -- The implementation is basic and not optimized for performance; more sophisticated methods (e.g., embeddings) can improve similarity measurement. +- **`answer_question(query, corpus, top_n=2)`**: + - Retrieves relevant chunks and generates an answer using the OpenAI API. -### Code Snippet +### Example Code ```python import os @@ -2162,22 +2214,43 @@ def answer_question(query, corpus, top_n=2): return "I don't have enough information to answer the question." 
context = "\n".join(relevant_chunks) client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) - return client.chat.completions.create(messages=[{"role": "system", "content": f"Based on the provided context, answer the following question: {query}\n\nContext:\n{context}"}, {"role": "user", "content": query}], model="gpt-3.5-turbo").choices[0].message.content.strip() + response = client.chat.completions.create( + messages=[ + {"role": "system", "content": f"Based on the provided context, answer the following question: {query}\n\nContext:\n{context}"}, + {"role": "user", "content": query}, + ], + model="gpt-3.5-turbo", + ) + return response.choices[0].message.content.strip() + +# Sample corpus +corpus = [ + "The luminescent forests of ZenML World are inhabited by glowing Zenbots.", + "In the neon skies of ZenML World, Cosmic Butterflies flutter gracefully.", + "Telepathic Treants communicate through a quantum neural network.", + "Deep within the melodic caverns, Fractal Fungi create a symphony of sounds.", + "Holographic Hummingbirds hover near ethereal waterfalls.", + "Gravitational Geckos traverse inverted cliffs.", + "Plasma Phoenixes soar above the chromatic canyons.", + "Crystalline Crabs scuttle along the prismatic shores." +] -# Example corpus -corpus = [preprocess_text(sentence) for sentence in [ - "The luminescent forests of ZenML World are inhabited by glowing Zenbots...", - "In the neon skies of ZenML World, Cosmic Butterflies flutter gracefully...", - # Additional sentences... -]] +corpus = [preprocess_text(sentence) for sentence in corpus] # Example queries -print(answer_question("What are Plasma Phoenixes?", corpus)) -print(answer_question("What kinds of creatures live on the prismatic shores of ZenML World?", corpus)) -print(answer_question("What is the capital of Panglossia?", corpus)) +questions = [ + "What are Plasma Phoenixes?", + "What kinds of creatures live on the prismatic shores of ZenML World?", + "What is the capital of Panglossia?" +] + +for question in questions: + print(f"Question: {question}") + print(f"Answer: {answer_question(question, corpus)}") ``` -This summary captures the essential components of the RAG pipeline implementation while maintaining critical technical details and code accuracy. +### Output Example +The implementation generates answers based on the provided context, demonstrating the basic functionality of the RAG pipeline. The similarity check is simplistic, using Jaccard similarity, which can be improved with more advanced techniques in future iterations. ================================================== @@ -2185,14 +2258,15 @@ This summary captures the essential components of the RAG pipeline implementatio ### Generating Embeddings for Retrieval -This section details how to generate embeddings to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. Embeddings are vector representations that capture the semantic meaning of data in a high-dimensional space, allowing for improved retrieval of relevant information based on similarity rather than simple keyword matching. +This section outlines the process of generating embeddings to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. Embeddings are vector representations that capture the semantic meaning of data in a high-dimensional space, enabling the retrieval of relevant information based on similarity rather than simple keyword matching. 
-#### Key Points: -- **Embeddings**: Generated using machine learning models (e.g., `sentence-transformers`), embeddings represent data in a high-dimensional space, making similar items closer together. -- **Purpose**: To quickly find relevant data chunks for user queries, improving accuracy over keyword searches. -- **Library**: The `sentence-transformers` library is used to generate embeddings, specifically the model `sentence-transformers/all-MiniLM-L12-v2`, which produces 384-dimensional embeddings. +#### Key Concepts: +- **Embeddings**: High-dimensional vectors that represent data semantically, generated using models like those from the `sentence-transformers` library. +- **Purpose**: To improve retrieval accuracy by capturing the context of the data, allowing for better responses to user queries. + +#### Code Implementation: +The following Python code demonstrates how to generate embeddings for a list of documents: -#### Code for Generating Embeddings: ```python from typing import Annotated, List import numpy as np @@ -2205,18 +2279,23 @@ def generate_embeddings(split_documents: List[Document]) -> Annotated[List[Docum model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2") log_artifact_metadata(artifact_name="embeddings", metadata={"embedding_type": "sentence-transformers/all-MiniLM-L12-v2", "embedding_dimensionality": 384}) - embeddings = model.encode([doc.page_content for doc in split_documents]) + document_texts = [doc.page_content for doc in split_documents] + embeddings = model.encode(document_texts) + for doc, embedding in zip(split_documents, embeddings): doc.embedding = embedding return split_documents ``` -#### Visualization of Embeddings: -To visualize the embeddings, dimensionality reduction techniques like t-SNE and UMAP can be employed. This helps in understanding how similar chunks cluster based on their semantic meaning. +- **Model**: The `sentence-transformers/all-MiniLM-L12-v2` model generates embeddings with a dimensionality of 384. +- **Document Model Update**: The `Document` model is updated to include an `embedding` attribute for storing generated embeddings. 
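+
+To see what the stored embeddings enable, a quick similarity check can be run with the same model. This is a minimal sketch; the example chunks and query are hypothetical:
+
+```python
+from sentence_transformers import SentenceTransformer, util
+
+model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")
+chunks = [
+    "ZenML stacks bundle the infrastructure components a pipeline runs on.",
+    "Embeddings capture the semantic meaning of text in a high-dimensional space.",
+]
+chunk_embeddings = model.encode(chunks)  # shape: (2, 384)
+query_embedding = model.encode("What is a ZenML stack?")
+
+scores = util.cos_sim(query_embedding, chunk_embeddings)  # cosine similarity per chunk
+print(chunks[int(scores.argmax())])  # prints the semantically closest chunk
+```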
+ +#### Visualization: +To visualize the embeddings, dimensionality reduction techniques like t-SNE and UMAP can be applied: -##### Code for Visualization: ```python +from matplotlib.colors import ListedColormap import matplotlib.pyplot as plt import numpy as np from sklearn.manifold import TSNE @@ -2227,69 +2306,66 @@ artifact = Client().get_artifact_version('EMBEDDINGS_ARTIFACT_UUID') embeddings = np.array([doc.embedding for doc in documents]) parent_sections = [doc.parent_section for doc in documents] +# Color mapping +unique_parent_sections = list(set(parent_sections)) +tol_colors = ["#4477AA", "#EE6677", "#228833", "#CCBB44", "#66CCEE", "#AA3377", "#BBBBBB"] +section_color_dict = dict(zip(unique_parent_sections, tol_colors[:len(unique_parent_sections)])) + def visualize(embeddings, parent_sections, method='tsne'): if method == 'tsne': - embeddings_2d = TSNE(n_components=2).fit_transform(embeddings) - else: - embeddings_2d = umap.UMAP(n_components=2).fit_transform(embeddings) + embeddings_2d = TSNE(n_components=2, random_state=42).fit_transform(embeddings) + else: # method == 'umap' + embeddings_2d = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings) plt.figure(figsize=(8, 8)) - for section in set(parent_sections): + for section in unique_parent_sections: mask = [section == ps for ps in parent_sections] - plt.scatter(embeddings_2d[mask, 0], embeddings_2d[mask, 1], label=section) + plt.scatter(embeddings_2d[mask, 0], embeddings_2d[mask, 1], c=section_color_dict[section], label=section) plt.title(f"{method.upper()} Visualization") plt.legend() plt.show() ``` -### Summary: -- The embeddings generated are stored as artifacts for retrieval purposes. -- The process is modular, allowing for potential future changes in the vector database without needing to regenerate embeddings. -- Visualization of embeddings aids in understanding the data's semantic structure, enhancing retrieval performance. +- **Visualization**: The embeddings can be visualized using either t-SNE or UMAP, allowing for an understanding of how similar chunks are grouped based on their semantic meaning. -For the complete code and further exploration, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). +#### Conclusion: +This process generates and visualizes embeddings, which can be stored as artifacts for retrieval in a vector database, enhancing the RAG pipeline's performance. For further details, refer to the complete code in the [GitHub repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/understanding-rag.md === -### Summary of Retrieval-Augmented Generation (RAG) +### Understanding Retrieval-Augmented Generation (RAG) -**Overview:** -Retrieval-Augmented Generation (RAG) enhances the capabilities of large language models (LLMs) by integrating a retrieval mechanism that fetches relevant documents from a large corpus to generate responses. This technique addresses LLM limitations, such as generating incorrect responses and handling large amounts of text. +**Overview**: RAG enhances LLM capabilities by integrating a retrieval mechanism to fetch relevant documents from a large corpus, addressing LLM limitations such as incorrect responses and token constraints. 
Proposed by Facebook in 2020, RAG combines retrieval and generation strengths, making it effective for tasks like question answering, summarization, and dialogue generation. -**RAG Pipeline:** +#### RAG Pipeline Process 1. **Retriever**: Identifies relevant documents from a corpus. -2. **Generator**: Produces responses based on the retrieved documents. - -**Benefits of RAG:** -- **Contextual Understanding**: Improves response accuracy by grounding outputs in relevant information. -- **Token Efficiency**: Focuses on a smaller set of documents, alleviating token limitations. -- **Cost-Effectiveness**: Reduces computational costs compared to pure generation-based approaches. +2. **Generator**: Produces responses based on retrieved documents. + - **Benefits**: + - Reduces incorrect responses by grounding answers in relevant information. + - Mitigates token limitations by focusing on a smaller document set. + - Cost-effective by optimizing resource usage. -**Use Cases:** -RAG is ideal for tasks requiring long-form responses and contextual understanding, such as: -- Question answering -- Summarization -- Dialogue generation - -It is a practical starting point for working with LLMs due to its lower data and resource requirements. +#### When to Use RAG +- Ideal for generating long-form responses requiring contextual understanding. +- Suitable for tasks needing grounded information. +- A practical starting point for exploring LLMs due to lower data and resource requirements. -**Integration with ZenML:** -ZenML facilitates the creation of RAG pipelines, providing tools for: -- Data ingestion -- Index management -- Artifact tracking +#### RAG in the ZenML Ecosystem +- ZenML facilitates RAG pipeline setup, integrating retrieval and generation models. +- Offers tools for data ingestion, index management, and artifact tracking. +- Supports scaling to complex setups, including fine-tuning and document reranking. -**Key Features of ZenML for RAG:** -- **Reproducibility**: Easily rerun pipelines and preserve previous artifact versions. -- **Scalability**: Deploy on cloud platforms to manage larger document corpora. -- **Artifact Tracking**: Monitor pipeline performance and debug issues through a dashboard. -- **Maintainability**: Modular pipeline structure allows for easy updates and experimentation. -- **Collaboration**: Share pipelines and insights with team members. +**Advantages of ZenML**: +- **Reproducibility**: Easily rerun pipelines with preserved artifact versions for performance comparison. +- **Scalability**: Handle larger document corpora via cloud deployment and scalable vector stores. +- **Artifact Tracking**: Monitor and debug pipeline performance with associated metadata. +- **Maintainability**: Modular pipeline format allows easy updates and experimentation. +- **Collaboration**: Share pipelines and insights with team members using the ZenML dashboard. -**Next Steps:** -The documentation will cover basic RAG pipeline components and advanced topics like document reranking, embedding finetuning, and LLM finetuning. +### Summary +RAG is a powerful technique for enhancing LLMs by combining retrieval and generation, making it suitable for various applications. ZenML provides a robust framework for implementing and managing RAG pipelines, ensuring reproducibility, scalability, and collaboration. 
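+
+To make the two-stage retriever/generator flow described above concrete, here is a minimal, illustrative sketch. The word-overlap retriever is a toy stand-in for the embedding-based retrieval used later in this guide, and the generator is reduced to prompt construction (the LLM call itself is elided):
+
+```python
+def retrieve(query: str, corpus: list, top_n: int = 3) -> list:
+    # Stage 1 (retriever): rank chunks by word overlap with the query
+    query_words = set(query.lower().split())
+    return sorted(corpus, key=lambda c: len(query_words & set(c.lower().split())), reverse=True)[:top_n]
+
+def build_prompt(query: str, corpus: list) -> str:
+    # Stage 2 (generator): the retrieved chunks ground the LLM's answer
+    context = "\n".join(retrieve(query, corpus))
+    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
+```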
================================================== @@ -2297,20 +2373,19 @@ The documentation will cover basic RAG pipeline components and advanced topics l ### RAG Pipelines with ZenML -Retrieval-Augmented Generation (RAG) combines retrieval-based and generation-based models, enhancing the capabilities of Large Language Models (LLMs). This guide outlines how to set up RAG pipelines using ZenML, focusing on key components such as data ingestion, index store management, and tracking artifacts. - -#### Key Points: - -- **Purpose of RAG**: RAG addresses the limitations of LLMs, which can generate incorrect responses, especially with ambiguous prompts, and have constraints on text length. While some LLMs, like Google's Gemini 1.5 Pro, can handle up to 1 million tokens, many open-source models manage significantly less. +**Overview**: Retrieval-Augmented Generation (RAG) combines retrieval-based and generation-based models to enhance the capabilities of Large Language Models (LLMs). This guide outlines the setup of RAG pipelines using ZenML, covering essential components such as data ingestion, index management, and artifact tracking. -- **Components of RAG Pipelines**: - 1. **Data Ingestion and Preprocessing**: Essential for preparing the data used in RAG. - 2. **Embeddings**: Represent data for retrieval; embeddings are crucial for the retrieval mechanism. - 3. **Vector Database**: Stores embeddings for efficient retrieval. - 4. **Artifact Tracking**: ZenML facilitates tracking of RAG-related artifacts. +**Key Points**: +- **LLM Limitations**: LLMs can generate human-like responses but may produce incorrect or inappropriate outputs, especially with ambiguous prompts. Most LLMs have token limits, with many handling significantly less than 1 million tokens, unlike some advanced models like Google's Gemini 1.5 Pro. + +- **RAG Pipeline Components**: + 1. **Purpose of RAG**: Addresses the limitations of LLMs by integrating retrieval mechanisms. + 2. **Data Ingestion**: Process of collecting and preparing data for the pipeline. + 3. **Embeddings**: Utilization of embeddings to represent data, forming the basis for retrieval. + 4. **Vector Database**: Storage of embeddings in a vector database for efficient retrieval. + 5. **Artifact Tracking**: Use ZenML to track artifacts associated with the RAG process. -#### Conclusion: -The guide culminates in a demonstration of how these components integrate to perform basic RAG inference, showcasing the full functionality of the pipeline. +**Conclusion**: The guide culminates in demonstrating how these components work together to execute basic RAG inference. ================================================== @@ -2318,16 +2393,13 @@ The guide culminates in a demonstration of how these components integrate to per ### Implementing Reranking in ZenML -This documentation outlines how to integrate a reranking component into an existing RAG (Retrieval-Augmented Generation) pipeline using the `rerankers` package. The reranker reorders retrieved documents based on their relevance to a given query. +This documentation outlines the integration of a reranker into an existing RAG (Retrieval-Augmented Generation) pipeline using the [`rerankers`](https://github.com/AnswerDotAI/rerankers/) package. The reranker reorders retrieved documents based on their relevance to a query. -#### Key Points - -- **Rerankers Package**: A lightweight dependency that provides an interface for various reranking models. 
It includes an abstract `Reranker` class for custom implementations and offers pre-built models.
-
-- **Reranking Process**: The reranker takes a query and a list of documents, returning a reordered list based on reranking scores.
-
-#### Example Code
+#### Reranker Overview
+- The `Reranker` abstract class allows the creation of custom rerankers or the use of pre-built models.
+- It takes a query and a list of documents, returning a reordered list based on reranking scores.
+#### Example Code for Reranking
 ```python
 from rerankers import Reranker
@@ -2342,42 +2414,21 @@ texts = [
 ]
 results = ranker.rank(query="What's your favorite sport?", docs=texts)
 ```
+The output reorders the documents, prioritizing those relevant to the query.
-**Output Example**:
-```python
-RankedResults(
-    results=[
-        Result(doc_id=5, text='I like to play basketball', score=-0.465, rank=1),
-        Result(doc_id=0, text='I like to play soccer', score=-0.735, rank=2),
-        ...
-    ],
-    query="What's your favorite sport?",
-    has_scores=True
-)
-```
-
-- The reranker prioritizes documents related to sports over less relevant topics.

-#### Rerank Function
-
-A helper function to rerank documents based on a query:
-
+#### Reranking Function
+A helper function can be added to rerank documents:
 ```python
 def rerank_documents(query: str, documents: List[Tuple], reranker_model: str = "flashrank") -> List[Tuple[str, str]]:
     ranker = Reranker(reranker_model)
     docs_texts = [f"{doc[0]} PARENT SECTION: {doc[2]}" for doc in documents]
     results = ranker.rank(query=query, docs=docs_texts)
     # Map the reranked order back to (text, original URL) pairs
     return [(results.results[i].text, documents[results.results[i].doc_id][1]) for i in range(len(results.results))]
 ```
+This function returns a list of tuples containing the reranked document text and the original URLs.

-- **Input**: A query and a list of documents (tuples of content and URL).
-- **Output**: A list of tuples with reranked document text and original URLs.

-#### Query Function
-
-A function to query similar documents with optional reranking:
-
+#### Querying Similar Documents
+The reranking function is used inside a querying function:
 ```python
 def query_similar_docs(question: str, url_ending: str, use_reranking: bool = False, returned_sample_size: int = 5) -> Tuple[str, str, List[str]]:
     embedded_question = get_embeddings(question)
@@ -2393,36 +2444,42 @@ def query_similar_docs(question: str, url_ending: str, use_reranking: bool = Fal
     return (question, url_ending, urls)
 ```
+This function retrieves similar documents based on a question, optionally reranking them before returning the top URLs.

-- **Functionality**: Retrieves embeddings for a question, connects to a database, and retrieves similar documents. If reranking is enabled, it reranks the top documents and returns the top five URLs.

-#### Evaluation
-
-The performance of the reranker can be evaluated to assess its impact on the quality of retrieved documents. For full code examples, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [eval_retrieval.py file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py).
+#### Further Exploration
+For complete code and additional details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [`eval_retrieval.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py) file.
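+
+As a usage sketch, the querying function above might be invoked as follows; the question and URL ending are hypothetical values:
+
+```python
+question, url_ending, urls = query_similar_docs(
+    question="How do I deploy ZenML to AWS?",
+    url_ending="cloud-orchestration.md",
+    use_reranking=True,
+    returned_sample_size=20,  # retrieve a wider pool, then rerank down to the top URLs
+)
+print(urls)
+```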
================================================== === File: docs/book/user-guide/llmops-guide/reranking/evaluating-reranking-performance.md === -### Evaluating Reranking Performance with ZenML +### Evaluating Reranking Performance -This documentation outlines how to evaluate the performance of a reranking model using ZenML. The evaluation process involves comparing retrieval performance before and after reranking using established metrics. +This documentation outlines how to evaluate the performance of a reranking model using ZenML, focusing on comparing retrieval performance before and after reranking. -#### Key Steps in Evaluation +#### Evaluation Process -1. **Retrieval Evaluation Function**: - The `perform_retrieval_evaluation` function evaluates retrieval performance based on generated questions and relevant documents. It checks if the expected URL is present in the retrieved results and calculates the failure rate. +1. **Retrieval Evaluation**: The evaluation begins by comparing retrieval performance using a set of queries and relevant documents. The following code snippet implements the evaluation: ```python def perform_retrieval_evaluation(sample_size: int, use_reranking: bool) -> float: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train") sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size)) - failures = sum(1 for item in sampled_dataset if not any(item["filename"].split("/")[-1] in url for url in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1], use_reranking)[2])) - return round((failures / len(sampled_dataset)) * 100, 2) + + failures = sum( + 1 for item in sampled_dataset if not any( + item["filename"].split("/")[-1] in url for url in query_similar_docs( + item["generated_questions"][0], item["filename"].split("/")[-1], use_reranking)[2] + ) + ) + + failure_rate = (failures / len(sampled_dataset)) * 100 + return round(failure_rate, 2) ``` -2. **Evaluation Steps**: - Two separate steps execute the retrieval evaluation: one without reranking and one with reranking. + This function evaluates retrieval performance by checking if the expected URL ending is present in the retrieved URLs, returning the failure rate. + +2. **Evaluation Steps**: Two separate steps are defined for evaluation: ```python @step @@ -2434,107 +2491,110 @@ This documentation outlines how to evaluate the performance of a reranking model return perform_retrieval_evaluation(sample_size, use_reranking=True) ``` -3. **Logging Failures**: - The logs provide insights into specific failures, helping identify anomalies in generated questions. + These steps log and return the failure rates for retrieval systems with and without reranking. -4. **Visualizing Results**: - The `visualize_evaluation_results` function creates a bar chart of various evaluation metrics, including failure rates and scores for different aspects of the retrieval system. +3. **Logging Failures**: Specific failure examples can be viewed in the logs, which help identify issues with generated questions. - ```python - @step(enable_cache=False) - def visualize_evaluation_results(...): - scores = [...] # Normalized scores - labels = [...] # Corresponding labels - ax.barh(np.arange(len(labels)), scores) - plt.savefig(io.BytesIO(), format="png") - ``` +#### Visualizing Reranking Performance -5. **Running the Evaluation Pipeline**: - To run the evaluation pipeline, clone the project repository and execute the evaluation command after running the main pipeline. 
+ZenML allows visualization of evaluation results. The following code creates a bar chart of various evaluation metrics: - ```bash - git clone https://github.com/zenml-io/zenml-projects.git - cd llm-complete-guide - python run.py --evaluation - ``` +```python +@step(enable_cache=False) +def visualize_evaluation_results(...): + scores = [score / 20 for score in [small_retrieval_eval_failure_rate, ...]] + labels = ["Small Retrieval Eval Failure Rate", ...] + + fig, ax = plt.subplots(figsize=(10, 6)) + ax.barh(np.arange(len(labels)), scores, align="center") + ax.set_yticks(np.arange(len(labels))) + ax.set_yticklabels(labels) + ax.set_title(f"Evaluation Metrics for {step_context.pipeline_run.name}") + plt.tight_layout() + plt.savefig(buf, format="png") + return Image.open(buf) +``` -#### Conclusion -The evaluation process helps identify the effectiveness of the reranking model and highlights areas for improvement, such as the underlying retrieval model. Visualizations in the ZenML dashboard facilitate a clear understanding of performance metrics. +This function normalizes scores and generates a horizontal bar chart to visualize the evaluation metrics. + +#### Running the Evaluation Pipeline + +To run the evaluation pipeline, clone the project repository and execute the evaluation command: + +```bash +git clone https://github.com/zenml-io/zenml-projects.git +cd llm-complete-guide +python run.py --evaluation +``` + +This will execute the evaluation pipeline and display results on the dashboard. + +### Conclusion + +The documentation provides a comprehensive guide to evaluating reranking models in ZenML, including performance comparison, logging, visualization, and execution of the evaluation pipeline. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/reranking.md === -**Summary: Adding Reranking to RAG Inference in ZenML** +### Summary: Adding Reranking to RAG Inference in ZenML -Rerankers enhance retrieval systems using LLMs by reordering retrieved documents based on additional features or scores, improving their quality and relevance. This section outlines how to integrate a reranker into the RAG inference pipeline in ZenML. +Rerankers enhance retrieval systems using LLMs by reordering retrieved documents based on additional features or scores, improving their quality. This section details how to integrate a reranker into your RAG inference pipeline in ZenML. -### Key Points: -- **Purpose of Rerankers**: They optimize the order of retrieved documents, potentially enhancing the performance of the retrieval system and the quality of LLM responses. -- **Workflow Context**: The reranker is an optional component added to an established workflow that includes data ingestion, preprocessing, embeddings generation, and retrieval. -- **Evaluation Metrics**: Basic metrics are set up to assess the retrieval system's performance. +#### Key Points: +- **Reranking Purpose**: Increases relevance and quality of retrieved documents, leading to better LLM responses. +- **Workflow Context**: Reranking is an optional enhancement to an existing workflow that includes data ingestion, preprocessing, embeddings generation, and retrieval. +- **Evaluation Metrics**: Basic metrics are established to assess retrieval performance. -### Visual Aid: -- A diagram illustrates the reranking process within the overall workflow, emphasizing its optional nature and potential benefits. 
+#### Visual Reference: +- A workflow diagram illustrates the reranking process within the overall system. -Incorporating a reranker can lead to improved document relevance and better LLM responses. +By implementing a reranker, users can optimize their retrieval systems for enhanced performance. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/understanding-reranking.md === -### Reranking Overview +## Reranking Overview -**Definition**: Reranking refines the initial ranking of documents retrieved by a system, crucial for enhancing relevance in Retrieval-Augmented Generation (RAG). The initial retrieval typically employs sparse methods like BM25 or TF-IDF, which focus on lexical matching but may miss semantic context. - -**Function**: Rerankers reorder documents by considering additional features such as semantic similarity and relevance scores, ensuring the most informative documents are prioritized for generating accurate outputs. +### What is Reranking? +Reranking refines the initial ranking of documents retrieved by a system, particularly in Retrieval-Augmented Generation (RAG). The initial retrieval often uses sparse methods like BM25 or TF-IDF, which may not effectively capture semantic meaning. Rerankers reorder documents by considering features such as semantic similarity and relevance scores, ensuring the LLM accesses the most relevant context for generating responses. ### Types of Rerankers +1. **Cross-Encoders**: Combine the query and document as input to produce a relevance score. They effectively capture interactions but are computationally intensive (e.g., BERT-based models). + +2. **Bi-Encoders**: Use separate encoders for the query and document, generating independent embeddings and computing similarity. They are more efficient but less effective at capturing interactions. -1. **Cross-Encoders**: - - Input: Concatenated query and document. - - Output: Relevance score. - - Example: BERT-based models. - - **Pros**: Effective interaction capture. - - **Cons**: Computationally intensive. - -2. **Bi-Encoders**: - - Input: Separate encoders for query and document. - - Output: Similarity score from independent embeddings. - - **Pros**: More efficient. - - **Cons**: Weaker interaction capture. - -3. **Lightweight Models**: - - Examples: Distilled models, small transformer variants. - - **Pros**: Faster, smaller footprint, suitable for real-time use. +3. **Lightweight Models**: Smaller, faster models (e.g., distilled versions) balance effectiveness and efficiency, suitable for real-time applications. ### Benefits of Reranking in RAG +1. **Improved Relevance**: Identifies the most relevant documents for a query, enhancing the LLM's context. + +2. **Semantic Understanding**: Captures semantic meaning beyond keyword matching, retrieving semantically similar documents. + +3. **Domain Adaptation**: Fine-tuned on specific data to incorporate domain knowledge, improving performance in targeted industries. -1. **Improved Relevance**: Identifies the most relevant documents for queries. -2. **Semantic Understanding**: Captures semantic meaning beyond keyword matching. -3. **Domain Adaptation**: Fine-tuned on domain-specific data for better performance. -4. **Personalization**: Tailors document retrieval based on user preferences and interactions. +4. **Personalization**: Tailors document retrieval based on user preferences and historical interactions. 
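+
+To make the cross-encoder approach described under "Types of Rerankers" concrete, here is a minimal sketch using the `sentence-transformers` library with a public MS MARCO checkpoint; the model choice and documents are illustrative:
+
+```python
+from sentence_transformers import CrossEncoder
+
+candidate_docs = [
+    "I like to play soccer",
+    "The capital of France is Paris",
+    "I like to play basketball",
+]
+# A cross-encoder scores each (query, document) pair jointly, capturing their interaction
+reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
+scores = reranker.predict([("What's your favorite sport?", doc) for doc in candidate_docs])
+reranked = [doc for _, doc in sorted(zip(scores, candidate_docs), reverse=True)]
+```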
-### Next Steps -The documentation will cover implementing reranking in ZenML and its integration into the RAG inference pipeline. +### Implementation +The next section will cover how to implement reranking in ZenML and integrate it into the RAG inference pipeline. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/README.md === -### Summary: Adding Reranking to RAG Inference in ZenML +**Summary: Adding Reranking to RAG Inference in ZenML** -Rerankers enhance retrieval systems using LLMs by reordering retrieved documents based on additional features or scores, improving the quality of results. This section details how to integrate a reranker into your RAG inference pipeline in ZenML. +Rerankers enhance retrieval systems using LLMs by improving the quality of retrieved documents through reordering based on additional features or scores. This section details how to integrate a reranker into your RAG inference pipeline in ZenML. -#### Key Points: -- **Purpose of Rerankers**: They optimize the relevance and quality of retrieved documents, leading to improved LLM responses. -- **Workflow Context**: The reranker is an optional enhancement within an established workflow that includes data ingestion, preprocessing, embedding generation, and retrieval. -- **Evaluation Metrics**: Basic metrics are set up to assess retrieval performance. +Previously, the workflow was established, covering data ingestion, preprocessing, embeddings generation, and retrieval, along with basic evaluation metrics for performance assessment. Reranking is an optional enhancement that can boost the relevance and quality of retrieved documents, leading to improved LLM responses. -#### Visual Reference: -- The reranking process is depicted in a workflow diagram, illustrating its role as an enhancement rather than a necessity. +**Key Points:** +- Rerankers reorder retrieved documents based on additional features/scores. +- Integration of reranking is optional but beneficial for retrieval performance. +- Enhances the overall effectiveness of the LLM's responses. -This integration can significantly boost the performance of your retrieval system. +![Reranking Workflow](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== @@ -2543,25 +2603,25 @@ This integration can significantly boost the performance of your retrieval syste ### Summary of Generation Evaluation in RAG Pipeline #### Overview -The generation component of a Retrieval-Augmented Generation (RAG) pipeline is responsible for producing answers based on retrieved context. Evaluating this component is subjective and challenging, but several methods can be employed. +The generation component of a Retrieval-Augmented Generation (RAG) pipeline generates answers based on retrieved context. Evaluating this component is subjective and lacks precise metrics, but several methods can be employed. #### Handcrafted Evaluation Tests -- Create a set of test cases to check for specific terms in generated outputs. -- Example: For questions about supported orchestrators, ensure terms like "Airflow" and "Kubeflow" are included, while "Flyte" and "Prefect" are excluded. +- Create examples to verify that generated outputs include or exclude specific terms based on known context. +- For instance, when asking about supported orchestrators, ensure terms like "Airflow" and "Kubeflow" are included, while "Flyte" and "Prefect" are excluded. 
- Start with simple tests and expand as needed, focusing on common mistakes observed in outputs. **Example Tables:** -- **Bad Answers Table:** +- **Bad Answers:** | Question | Bad Words | |----------|-----------| | What orchestrators does ZenML support? | AWS Step Functions, Flyte, Prefect, Dagster | -- **Good Responses Table:** +- **Good Responses:** | Question | Good Words | |----------|------------| | What are the supported orchestrators in ZenML? | Kubeflow, Airflow | -**Test Code Example:** +**Testing Code Example:** ```python class TestResult(BaseModel): success: bool @@ -2579,27 +2639,20 @@ def test_content_for_bad_words(item: dict, n_items_retrieved: int = 5) -> TestRe return TestResult(success=True, question=question, response=response) ``` -#### Running Tests -Use a test runner to execute tests and log results: +#### End-to-End Evaluation +Combine tests to assess the generation component: ```python -def run_tests(test_data: list, test_function: Callable) -> float: - failures = 0 - total_tests = len(test_data) - for item in test_data: - test_result = test_function(item) - if not test_result.success: - logging.error(f"Test failed for question: '{test_result.question}'. Found word: '{test_result.keyword}'. Response: '{test_result.response}'") - failures += 1 - failure_rate = (failures / total_tests) * 100 - logging.info(f"Total tests: {total_tests}. Failures: {failures}. Failure rate: {failure_rate}%") - return round(failure_rate, 2) +@step +def e2e_evaluation() -> Tuple[float, float, float]: + failure_rate_bad_answers = run_tests(bad_answers, test_content_for_bad_words) + failure_rate_good_responses = run_tests(good_responses, test_content_contains_good_words) + return failure_rate_bad_answers, failure_rate_good_responses ``` #### Automated Evaluation Using Another LLM -- Use a separate LLM to evaluate the quality of generated answers on a scale of 1 to 5 across categories like toxicity, faithfulness, helpfulness, and relevance. -- Set up a Pydantic model to validate scores. +Utilize a separate LLM to evaluate the output quality on a scale from 1 to 5 across categories like toxicity, faithfulness, helpfulness, and relevance. -**Pydantic Model Example:** +**Pydantic Model for Results:** ```python class LLMJudgedTestResult(BaseModel): toxicity: conint(ge=1, le=5) @@ -2608,42 +2661,31 @@ class LLMJudgedTestResult(BaseModel): relevance: conint(ge=1, le=5) ``` -**Test Function Example:** +**LLM Judged Test Function:** ```python def llm_judged_test_e2e(question: str, context: str, n_items_retrieved: int = 5) -> LLMJudgedTestResult: response = process_input_with_retrieval(question, n_items_retrieved) - prompt = f"Analyze the following text and context for scores on toxicity, faithfulness, helpfulness, and relevance." + prompt = f"Analyze the text for scores on toxicity, faithfulness, helpfulness, and relevance." 
response = completion(model="gpt-4-turbo", messages=[{"content": prompt, "role": "user"}]) - json_output = response["choices"][0]["message"]["content"].strip() - return LLMJudgedTestResult(**json.loads(json_output)) + return LLMJudgedTestResult(**json.loads(response["choices"][0]["message"]["content"].strip())) ``` -#### Running LLM Judged Tests -Execute tests across a dataset and calculate average scores: +**Running Tests:** ```python def run_llm_judged_tests(test_function: Callable, sample_size: int = 50) -> Tuple[float, float, float, float]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) - total_scores = [0, 0, 0, 0] - total_tests = len(dataset) - - for item in dataset: - question = item["generated_questions"][0] - context = item["page_content"] - result = test_function(question, context) - total_scores = [total + getattr(result, attr) for total, attr in zip(total_scores, ['toxicity', 'faithfulness', 'helpfulness', 'relevance'])] - - return tuple(round(score / total_tests, 3) for score in total_scores) + # Accumulate scores and calculate averages + return (average_toxicity_score, average_faithfulness_score, average_helpfulness_score, average_relevance_score) ``` #### Considerations for Improvement -- Implement retries for JSON parsing errors. -- Use OpenAI's JSON mode for consistent output. -- Explore batch processing and more sophisticated evaluation methods. -- Increase sample size for better accuracy. -- Consider using multiple LLMs for evaluation. +- Implement retries for JSON output errors. +- Utilize OpenAI's JSON mode for consistent output formatting. +- Explore batch processing and increase sample sizes for more robust evaluations. +- Consider integrating frameworks like `ragas`, `trulens`, and others for enhanced evaluation capabilities. -#### Conclusion -This evaluation framework allows tracking improvements in both retrieval and generation components of the RAG pipeline. For full code and additional resources, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/). +### Conclusion +This evaluation framework for the generation component of a RAG pipeline provides a structured approach to assess output quality, enabling continuous improvement and optimization tailored to specific use cases. For complete code, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_e2e.py). ================================================== @@ -2651,47 +2693,40 @@ This evaluation framework allows tracking improvements in both retrieval and gen ### Evaluation in 65 Lines of Code -This section demonstrates how to evaluate the performance of a Retrieval-Augmented Generation (RAG) pipeline in 65 lines of code, building on a previous example. The full code can be found in the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_eval.py). The evaluation code depends on earlier RAG pipeline functions. +This section demonstrates how to evaluate a Retrieval-Augmented Generation (RAG) pipeline using 65 lines of code, building on a previous example. The complete code is available in the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_eval.py). The evaluation relies on functions from the earlier RAG pipeline. 
#### Evaluation Data

The evaluation data consists of questions and their expected answers:
-
```python
eval_data = [
    {"question": "What creatures inhabit the luminescent forests of ZenML World?", "expected_answer": "The luminescent forests of ZenML World are inhabited by glowing Zenbots."},
-    {"question": "What do Fractal Fungi do in the melodic caverns of ZenML World?", "expected_answer": "Fractal Fungi emit pulsating tones that resonate through the crystalline structures, creating a symphony of otherworldly sounds in the melodic caverns of ZenML World."},
+    {"question": "What do Fractal Fungi do in the melodic caverns of ZenML World?", "expected_answer": "Fractal Fungi emit pulsating tones..."},
    {"question": "Where do Gravitational Geckos live in ZenML World?", "expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML World."},
]
```

#### Evaluation Functions
-
-1. **Retrieval Evaluation**: Checks if retrieved chunks contain any words from the expected answer.
-
+1. **Retrieval Evaluation**: Checks if any retrieved chunks contain words from the expected answer.
```python
def evaluate_retrieval(question, expected_answer, corpus, top_n=2):
    relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n)
    return any(any(word in chunk for word in tokenize(expected_answer)) for chunk in relevant_chunks)
```

-2. **Generation Evaluation**: Uses OpenAI's chat completion to assess the quality of the generated answer.
-
+2. **Generation Evaluation**: Uses OpenAI's API to assess the relevance and accuracy of the generated answer.
```python
def evaluate_generation(question, expected_answer, generated_answer):
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    chat_completion = client.chat.completions.create(
-        messages=[
-            {"role": "system", "content": "You are an evaluation judge. Given a question, an expected answer, and a generated answer, determine if the generated answer is relevant and accurate. Respond with 'YES' or 'NO'."},
-            {"role": "user", "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}\nIs the generated answer relevant and accurate?"}
-        ],
-        model="gpt-3.5-turbo",
+        messages=[{"role": "system", "content": "You are an evaluation judge..."},
+                  {"role": "user", "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}"}],
+        model="gpt-3.5-turbo"
    )
    return chat_completion.choices[0].message.content.strip().lower() == "yes"
```

#### Evaluation Process
-The evaluation process involves iterating through the evaluation data, calculating retrieval and generation scores:
-
+The evaluation iterates through the `eval_data`, calculating scores for both retrieval and generation:
```python
retrieval_scores = []
generation_scores = []
@@ -2708,7 +2743,8 @@ print(f"Retrieval Accuracy: {retrieval_accuracy:.2f}")
print(f"Generation Accuracy: {generation_accuracy:.2f}")
```

-This example achieves 100% accuracy for both retrieval and generation, demonstrating a basic yet effective evaluation approach for RAG pipelines. Further sections will elaborate on more sophisticated evaluation techniques.
+#### Summary
+The example demonstrates how to evaluate a RAG pipeline, achieving 100% accuracy for both retrieval and generation. Future sections will provide more advanced implementations of RAG evaluation.
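For completeness, the loop elided by the hunk above presumably fills the two score lists before the accuracies are printed; a minimal sketch, where `corpus` is the document corpus from the earlier pipeline and `answer_question` is a hypothetical stand-in for its generation helper:

```python
for item in eval_data:
    retrieval_scores.append(
        evaluate_retrieval(item["question"], item["expected_answer"], corpus)
    )
    # answer_question is a hypothetical stand-in for the RAG generation helper.
    generated = answer_question(item["question"], corpus)
    generation_scores.append(
        evaluate_generation(item["question"], item["expected_answer"], generated)
    )

retrieval_accuracy = sum(retrieval_scores) / len(retrieval_scores)
generation_accuracy = sum(generation_scores) / len(generation_scores)
```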
==================================================

@@ -2717,28 +2753,25 @@ This example achieves 100% accuracy for both retrieval and generation, demonstra

### Summary of RAG System Evaluation Documentation

#### Overview
-This documentation outlines the evaluation of Retrieval-Augmented Generation (RAG) systems, emphasizing the separation of embedding generation and evaluation processes.
+This documentation outlines how to evaluate the performance of a Retrieval-Augmented Generation (RAG) system, emphasizing the separation of embedding generation and evaluation processes.

#### Evaluation Pipeline
-The evaluation is implemented as a separate pipeline that runs after the main embedding generation pipeline.
-This separation allows for focused evaluation without interfering with embedding generation.
-Depending on the use case, evaluations can be integrated into the main pipeline to assess embedding quality for production readiness.
+The evaluation is structured as a separate pipeline that runs after the main pipeline, which generates and populates embeddings. This separation keeps the two concerns apart, so evaluation never interferes with embedding generation.
+Depending on the use case, evaluations can be integrated into the main pipeline to act as a gating mechanism for embedding quality.
+For development, consider using a local LLM judge for quicker iterations, then switch to a cloud LLM (e.g., Anthropic's Claude, OpenAI's GPT-3.5/4) for comprehensive evaluations.

-#### LLM Judge Considerations
-- For development, consider using a local LLM judge to expedite evaluations, reserving cloud-based LLMs (e.g., Anthropic's Claude, OpenAI's GPT-3.5/4) for comprehensive evaluations.
-- Automated evaluations save time but do not replace human oversight; results should be reviewed to ensure expected performance.
+#### Automated Evaluation
+- Automation can streamline the evaluation process but does not eliminate the need for human oversight. The LLM judge is costly and time-consuming, necessitating human review of results to ensure expected performance.

#### Evaluation Frequency
-- The frequency and depth of evaluations depend on project constraints and use cases.
-- Balance cost and speed: quick tests (e.g., retrieval system) can be run frequently, while more expensive evaluations (e.g., LLM judge) should be less frequent.
-- Structure tests accordingly to optimize resource use.
+- The frequency and depth of evaluations should be tailored to the specific project constraints. Balance the cost of evaluations with the need for rapid iteration.
+- Quick and inexpensive tests (e.g., retrieval system tests) should be run frequently, while more costly evaluations (e.g., LLM judge) can be conducted less often.

#### Next Steps
-- The documentation hints at further improvements, such as adding a reranker to enhance retrieval performance without retraining embeddings.
-
-#### Running the Evaluation Pipeline
-To run the evaluation pipeline, follow these steps:
+- The documentation suggests adding a reranker to enhance retrieval performance without retraining embeddings.
+
+#### Practical Implementation
+To run the evaluation pipeline:
1. Clone the project repository:
   ```bash
   git clone https://github.com/zenml-io/zenml-projects.git
   ```
@@ -2748,9 +2781,7 @@ To run the evaluation pipeline, follow these steps:
   ```bash
   python run.py --evaluation
   ```
-4. Results will be output to the console, and progress can be monitored via the dashboard.
-
-This concise guide provides essential information for evaluating RAG systems effectively while maintaining clarity on the processes and tools involved.
+This will output results to the console, allowing inspection of progress, logs, and results in the dashboard.

==================================================

=== File: docs/book/user-guide/llmops-guide/evaluation/retrieval.md ===

### Retrieval Evaluation in RAG Pipeline

-The retrieval component in a Retrieval-Augmented Generation (RAG) pipeline identifies relevant documents to enhance the generation process. Evaluating its performance focuses on the accuracy of semantic searches—how relevant the retrieved documents are to a given query.
+The retrieval component in a RAG (Retrieval-Augmented Generation) pipeline is crucial for finding relevant documents to support the generation component. This section outlines methods to evaluate the performance of the retrieval component, focusing on the accuracy of semantic searches.

-#### Manual Evaluation
-- **Handcrafted Queries**: Create specific queries to verify if the retrieval component can fetch the expected documents. This method, while time-consuming, helps identify edge cases and assess performance.
-- **Example Queries**:
-    - "How do I get going with the Label Studio integration?"
-    - "How can I write my own custom materializer?"
-
-- **Implementation**: Encode the query as a vector and query a PostgreSQL database for similar vectors. Check if the expected document URL appears in the top `n` results.
+#### Manual Evaluation with Handcrafted Queries
+Manual evaluation involves creating specific queries to check if the retrieval component can find the relevant documents. This method, while time-consuming, helps identify edge cases. Example queries include:
+
+| Question | URL Ending |
+|----------|------------|
+| How do I get going with the Label Studio integration? | stacks-and-components/component-guide/annotators/label-studio |
+| How can I write my own custom materializer? | user-guide/advanced-guide/data-management/handle-custom-data-types |
+| How do I generate embeddings in a RAG pipeline with ZenML? | user-guide/llmops-guide/rag-with-zenml/embeddings-generation |
+| How do I use failure hooks in my ZenML pipeline? | user-guide/advanced-guide/pipelining-features/use-failure-success-hooks |
+| Can I deploy ZenML self-hosted with Helm? | deploying-zenml/zenml-self-hosted/deploy-with-helm |
+
+The retrieval process involves encoding the query and querying a PostgreSQL database for similar vectors. The following code implements this:

```python
def query_similar_docs(question: str, url_ending: str) -> tuple:
@@ -2779,20 +2815,10 @@ def test_retrieved_docs_retrieve_best_url(question_doc_pairs: list) -> float:
    return round((failures / len(question_doc_pairs)) * 100, 2)
```

-- **Logging**: Implement logging for feedback during local runs.
+#### Automated Evaluation with Synthetic Queries
+For broader evaluation, synthetic queries can be generated using an LLM. Each document chunk's text is passed to the LLM to create relevant questions. The generated questions are then used to evaluate the retrieval component.
-```python
-@step
-def retrieval_evaluation_small() -> Annotated[float, "small_failure_rate_retrieval"]:
-    failure_rate = test_retrieved_docs_retrieve_best_url(question_doc_pairs)
-    logging.info(f"Retrieval failure rate: {failure_rate}%")
-    return failure_rate
-```
-
-#### Automated Evaluation
-- **Synthetic Queries**: Use an LLM to generate questions from document chunks, allowing for a broader evaluation.
-- **Example Question Generation**:
-    - Given a document chunk, generate a relevant question to assess retrieval accuracy.
+Example question generation code:

```python
from typing import List
@@ -2801,7 +2827,7 @@ from zenml import step

def generate_question(chunk: str, local: bool = False) -> str:
    model = "ollama/mixtral" if local else "gpt-3.5-turbo"
-    response = completion(model=model, messages=[{"content": f"Generate a question from this text: `{chunk}`", "role": "user"}])
+    response = completion(model=model, messages=[{"content": f"Generate a question about this text: `{chunk}`", "role": "user"}])
    return response.choices[0].message.content

@step
@@ -2811,25 +2837,25 @@ def generate_questions_from_chunks(docs_with_embeddings: List[Document], local:
    return docs_with_embeddings
```

-- **Evaluation Process**: After generating questions, check if the original document's URL is in the top `n` results.
+Once questions are generated, they can be evaluated against the retrieval component:

```python
@step
def retrieval_evaluation_full(sample_size: int = 50) -> Annotated[float, "full_failure_rate_retrieval"]:
    dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size))
-    failures = sum(1 for item in dataset if item["generated_questions"][0] not in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1])[2])
+    failures = sum(1 for item in dataset if item["filename"].split("/")[-1] not in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1])[2])
    return round((failures / len(dataset)) * 100, 2)
```

#### Performance Insights
-- Initial tests showed a 20% failure rate for manual queries and 16% for automated queries, indicating room for improvement.
-- **Improvement Strategies**:
-    - Generate more diverse questions.
-    - Use semantic similarity metrics for nuanced performance evaluation.
-    - Conduct comparative evaluations of different retrieval approaches.
-    - Perform error analysis to identify patterns in failures.
+Initial tests showed a 20% failure rate with handcrafted queries and 16% with synthetic queries, indicating room for improvement. Suggested enhancements include:
+
+- **Diverse Question Generation**: Use varied prompts to generate different question types.
+- **Semantic Similarity Metrics**: Implement metrics like cosine similarity for nuanced performance evaluation (see the sketch after this list).
+- **Comparative Evaluation**: Test different retrieval methods and models.
+- **Error Analysis**: Investigate failure cases for targeted improvements.
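As a rough illustration of the semantic-similarity suggestion above, retrieved chunks could be scored by the cosine similarity between their embeddings and the question's embedding; `get_embedding` below is a hypothetical stand-in for whatever embedding function the pipeline already uses:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two embedding vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_retrieval_score(question: str, retrieved_chunks: list) -> float:
    # Average question-to-chunk similarity; get_embedding is a hypothetical
    # stand-in for the embedding function used elsewhere in the pipeline.
    question_embedding = get_embedding(question)
    similarities = [
        cosine_similarity(question_embedding, get_embedding(chunk))
        for chunk in retrieved_chunks
    ]
    return sum(similarities) / len(similarities)
```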
+The evaluation process provides a baseline understanding of retrieval performance, guiding future enhancements. For complete code, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [eval_retrieval.py file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py). ================================================== @@ -2837,40 +2863,27 @@ This structured approach to retrieval evaluation provides a solid foundation for ### Evaluation and Metrics for RAG Pipeline -This section focuses on evaluating the performance of a Retrieval-Augmented Generation (RAG) pipeline using metrics and visualizations. Evaluating a RAG pipeline is essential for understanding its effectiveness and identifying improvement areas. Traditional metrics like accuracy, precision, and recall are often inadequate for language models due to the subjective nature of generated text. Thus, a holistic evaluation approach is necessary. +This section focuses on evaluating the performance of a Retrieval-Augmented Generation (RAG) pipeline using metrics and visualizations. Evaluating a RAG pipeline is essential for understanding its effectiveness and identifying improvement areas. Traditional metrics like accuracy, precision, and recall are often inadequate for language models due to the subjective nature of generated text. Therefore, a holistic evaluation approach is necessary. #### Key Evaluation Areas: 1. **Retrieval Evaluation**: Assessing the relevance of retrieved documents or document chunks to the query. 2. **Generation Evaluation**: Evaluating the coherence and helpfulness of the generated text for the specific use case. -#### Considerations for Evaluation: -- The evaluation criteria depend on the specific use case and acceptable error levels. For example, in a user-facing chatbot, consider: +#### Evaluation Considerations: +- The evaluation criteria depend on the specific use case and acceptable error tolerance. For example, in a user-facing chatbot, consider: - Relevance of retrieved documents. - Coherence and helpfulness of generated answers. - Presence of harmful language (e.g., hate speech). -#### End-to-End Evaluation: The generation evaluation serves as an end-to-end assessment of the RAG pipeline, allowing for subjective metrics since it evaluates the system's final output. -#### Best Practices: -In a production setting, it's advisable to establish a baseline by evaluating a raw language model (without RAG components) and then compare it to the RAG pipeline's performance to gauge the added value of retrieval and generation components. - -#### Next Steps: -The documentation will provide a high-level code example and detailed guidance on performing evaluations and interpreting results. - -### Code Example (Simplified): -```python -# Example of evaluation functions -def evaluate_retrieval(retrieved_docs, query): - # Check relevance of retrieved documents - pass +#### Practical Guidance: +- It's advisable to establish a baseline by evaluating a raw LLM model (without RAG components) before comparing it to the RAG pipeline's performance. This helps gauge the added value of retrieval and generation components. 
-def evaluate_generation(generated_text, query): - # Assess coherence and helpfulness - pass -``` +#### Code Example: +A high-level code example demonstrating the two main evaluation areas is available, followed by detailed sections on each evaluation area and practical guidance on execution and result analysis. -This summary captures the essential technical information and key points regarding the evaluation of a RAG pipeline while maintaining clarity and conciseness. +For further details, refer to the sections on [Retrieval Evaluation](retrieval.md) and [Generation Evaluation](generation.md). ================================================== @@ -2878,17 +2891,20 @@ This summary captures the essential technical information and key points regardi ### Cloud Guide Summary -This guide provides instructions for connecting major public clouds to your ZenML deployment by configuring a **stack**. A stack is the configuration of tools and infrastructure required for running pipelines. When executing a pipeline, ZenML adapts its actions based on the selected stack. +This section provides straightforward instructions for connecting major public clouds to your ZenML deployment by configuring a **stack**. A stack is the configuration of tools and infrastructure necessary for running pipelines. ZenML acts as a translation layer, enabling code execution across different stacks. **Key Points:** -- This guide focuses on **registering** a stack, assuming the necessary resources for pipeline execution have already been **provisioned**. -- Provisioning can be done through: - - Manual setup +- The guide focuses on **registering** a stack, assuming the required resources for pipeline execution are already provisioned. +- To provision infrastructure, options include: + - Manual provisioning - In-browser stack deployment wizard - Stack registration wizard - ZenML Terraform modules -![ZenML is the translation layer that allows your code to run on any of your stacks](../../.gitbook/assets/vpc_zenml.png) +**Visual Aid:** +- An image illustrates ZenML's role in facilitating code execution across stacks. + +This guide does not cover the provisioning process itself but emphasizes the registration of pre-provisioned stacks. ================================================== @@ -2896,55 +2912,60 @@ This guide provides instructions for connecting major public clouds to your ZenM ### Community & Content Overview -The ZenML community offers multiple ways to connect with the development team and enhance understanding of the framework. +The ZenML community offers various ways to connect with the development team and enhance understanding of the framework. -- **Slack Channel**: Join the [ZenML Slack channel](https://zenml.io/slack) for community support, discussions, and project sharing. It's a great resource for finding answers to questions. +#### Slack Channel +- **[Slack channel](https://zenml.io/slack)**: Main hub for community interaction, support, and sharing projects. Many questions may already have answers here. -- **Social Media**: Follow us on [LinkedIn](https://www.linkedin.com/company/zenml) and [Twitter](https://twitter.com/zenml_io) for updates on releases, events, and MLOps. Engage with our posts to help spread the word. +#### Social Media +- **[LinkedIn](https://www.linkedin.com/company/zenml)** and **[Twitter](https://twitter.com/zenml_io)**: Follow for updates on releases, events, and MLOps. Engagement through comments and shares is encouraged. 
-- **YouTube Channel**: Access tutorials and workshops on our [YouTube channel](https://www.youtube.com/c/ZenML) for visual learning. +#### YouTube Channel +- **[YouTube channel](https://www.youtube.com/c/ZenML)**: Contains video tutorials and workshops for visual learners. -- **Public Roadmap**: Our [public roadmap](https://zenml.io/roadmap) allows users to provide feedback and vote on feature priorities, fostering collaboration between users and developers. +#### Public Roadmap +- **[Public roadmap](https://zenml.io/roadmap)**: Community feedback shapes ZenML's development. Users can suggest and vote on feature ideas. -- **Blog**: Visit our [Blog](https://zenml.io/blog/) for articles on tool implementation, new features, and insights from our team. +#### Blog +- **[Blog](https://zenml.io/blog/)**: Articles from the team covering implementation processes, new features, and insights. -- **Podcast**: Tune into our [Podcast](https://podcast.zenml.io/) for interviews and discussions on machine learning and MLOps with industry leaders. +#### Podcast +- **[Podcast](https://podcast.zenml.io/)**: Features interviews and discussions on machine learning, deep learning, and MLOps. -- **Newsletter**: Subscribe to our [Newsletter](https://zenml.io/newsletter-signup) for updates on open-source tooling and ZenML news. - -This documentation provides essential resources for engaging with ZenML and staying informed about its developments. +#### Newsletter +- **[Newsletter](https://zenml.io/newsletter-signup)**: Subscribe for updates on open-source tooling and ZenML news. ================================================== === File: docs/book/reference/how-do-i.md === -### ZenML Documentation Summary +# ZenML Documentation Summary **Last Updated**: December 13, 2023 -#### Common Questions +## Common Questions -- **Contributing to ZenML**: Refer to the [Contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). For small features or bug fixes, open a pull request. For larger changes, discuss in [Slack](https://zenml.io/slack/) or create an [issue](https://github.com/zenml-io/zenml/issues/new/choose). +- **Contributing to ZenML**: Refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for submitting small features or bug fixes via pull requests. For larger contributions, discuss plans on [Slack](https://zenml.io/slack/) or create an [issue](https://github.com/zenml-io/zenml/issues/new/choose). -- **Adding Custom Components**: Start with the [general documentation](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). For specific types, see dedicated sections (e.g., [custom orchestrators](../component-guide/orchestrators/custom.md)). +- **Adding Custom Components**: Start with the [general documentation](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) on custom stack components. For specific types, such as orchestrators, refer to the dedicated section [here](../component-guide/orchestrators/custom.md). -- **Mitigating Dependency Clashes**: Consult the [handling dependencies documentation](../how-to/pipeline-development/configure-python-environments/handling-dependencies.md). +- **Mitigating Dependency Clashes**: Consult the [dedicated documentation](../how-to/pipeline-development/configure-python-environments/handling-dependencies.md) for strategies to handle dependency issues. -- **Deploying Cloud Infrastructure/MLOps Stacks**: ZenML is stack-agnostic. 
Documentation for stack components provides deployment instructions for popular cloud providers. +- **Deploying Cloud Infrastructure/MLOps Stacks**: ZenML is stack-agnostic. Documentation for each stack component explains deployment on popular cloud providers. -- **Self-hosted ZenML Deployments**: Refer to the documentation on [self-hosted deployments](../getting-started/deploying-zenml/README.md). +- **Deploying ZenML on Internal Clusters**: See the documentation on [self-hosted ZenML deployments](../getting-started/deploying-zenml/README.md) for options. -- **Hyperparameter Tuning**: Learn more in our [hyperparameter tuning guide](../how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md). +- **Hyperparameter Tuning**: Refer to our [guide](../how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md) for implementation details. -- **Resetting ZenML Client**: Use `zenml clean` to reset your client and wipe local metadata. This is destructive; consult [Slack](https://zenml.io/slack/) if unsure. +- **Resetting ZenML Client**: Use `zenml clean` to reset your client and wipe the local metadata database. This action is destructive; consult us on [Slack](https://zenml.io/slack/) if unsure. -- **Dynamic Pipelines and Steps**: Read about composing steps and pipelines in the [starter guide](../user-guide/starter-guide/create-an-ml-pipeline.md) and check related code examples in the hyperparameter tuning guide. +- **Dynamic Pipelines and Steps**: Read the [guide on composing steps and pipelines](../user-guide/starter-guide/create-an-ml-pipeline.md) and check code examples in the hyperparameter tuning guide. -- **Using Project Templates**: Project templates facilitate quick setup. The `starter` template is recommended for most use cases. Custom templates can be created in a Git repository. +- **Using Project Templates**: Utilize [Project Templates](../how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md) for quick starts. The Starter template (`starter`) is recommended for basic scaffolding. -- **Upgrading ZenML**: Upgrade the client with `pip install --upgrade zenml`. For server upgrades, see the [upgrade documentation](../how-to/manage-zenml-server/upgrade-zenml-server.md). +- **Upgrading ZenML**: Upgrade the client with `pip install --upgrade zenml`. For server upgrades, refer to the [dedicated section](../how-to/manage-zenml-server/upgrade-zenml-server.md). -- **Using Specific Stack Components**: For details on specific components, refer to the [component guide](../component-guide/README.md). +- **Using Specific Stack Components**: For details on specific components, consult the [component guide](../component-guide/README.md). ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) @@ -2954,27 +2975,34 @@ This documentation provides essential resources for engaging with ZenML and stay ### ZenML FAQ Summary -**Purpose of ZenML**: Developed to address challenges in deploying large-scale ML pipelines, ZenML offers a production-ready solution for managing machine learning models. - -**Difference from Orchestrators**: ZenML is not merely an orchestrator like Airflow or Kubeflow; it is a framework that enables running pipelines on various orchestrators while coordinating other ML system components. Users can utilize standard orchestrators or create custom ones for enhanced control. 
+#### Purpose of ZenML +ZenML was developed to address challenges faced in deploying machine learning models in production, aiming to provide a simple, production-ready solution for large-scale ML pipelines. -**Tool Integration**: For integration with tools, refer to the [documentation](https://docs.zenml.io) and the [component guide](../component-guide/README.md). The ZenML community is actively expanding integrations, and users can suggest features via the [roadmap](https://zenml.io/roadmap) and [discussion forum](https://zenml.io/discussion). +#### ZenML vs. Orchestrators +ZenML is not merely an orchestrator like Airflow or Kubeflow; it is a framework that allows execution of ML pipelines on various orchestrators. Users can utilize standard orchestrators or create custom ones for enhanced control. -**Windows Support**: ZenML officially supports Windows through WSL. Some features may not work outside of WSL. +#### Tool Integration +For integration queries, refer to the [documentation](https://docs.zenml.io) and the [component guide](../component-guide/README.md) for instructions and sample code. The ZenML team is continuously adding new integrations, and users can suggest features via the [roadmap](https://zenml.io/roadmap) and [discussion forum](https://zenml.io/discussion). -**Apple Silicon Support**: ZenML supports Macs with Apple Silicon. Set the environment variable: -```bash -export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES -``` -This is necessary for local server use; it's not needed for CLI usage with a deployed server. +#### OS Support +- **Windows**: Officially supported via WSL; limited functionality outside WSL. +- **Mac (Apple Silicon)**: Supported with the following environment variable: + ```bash + export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES + ``` + This is necessary for local server use but not required for CLI operations with a deployed server. -**Custom Tool Integration**: Guidance for extending ZenML with custom tools is available [here](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). +#### Custom Tool Integration +For extending ZenML with custom tools, refer to the guide on [implementing a custom stack component](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). -**Contributing**: Community contributions are welcome. Start with issues labeled as [`good-first-issue`](https://github.com/zenml-io/zenml/labels/good%20first%20issue) and review the [Contributing Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). +#### Community Contribution +To contribute, start with issues labeled as [`good-first-issue`](https://github.com/zenml-io/zenml/labels/good%20first%20issue) and review the [Contributing Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). -**Community Engagement**: Join the [Slack group](https://zenml.io/slack/) for support and discussions. +#### Community Engagement +Join the [Slack group](https://zenml.io/slack/) for questions and discussions with the ZenML community. -**License**: ZenML is licensed under the Apache License Version 2.0. Full license details are available in [LICENSE.md](https://github.com/zenml-io/zenml/blob/main/LICENSE). Contributions are also licensed under this agreement. +#### Licensing +ZenML is licensed under the Apache License Version 2.0. More details can be found in the [LICENSE.md](https://github.com/zenml-io/zenml/blob/main/LICENSE). Contributions will also fall under this license. 
==================================================

=== File: docs/book/reference/environment-variables.md ===

# Environment Variables for ZenML

-ZenML provides several pre-defined environment variables to control its behavior. Below are the key variables, their default values, and options:
+ZenML allows configuration through several pre-defined environment variables. Below are key variables with their default values and options:

## Logging Verbosity
-Set the logging level:
+Control the logging level:
```bash
export ZENML_LOGGING_VERBOSITY=INFO # Options: INFO, WARN, ERROR, CRITICAL, DEBUG
```

## Disable Step Logs
-To prevent storing step logs (which may cause performance issues):
+To prevent storing step logs:
```bash
export ZENML_DISABLE_STEP_LOGS_STORAGE=true # Set to true to disable
```

## ZenML Repository Path
-Specify the repository location:
+Specify the path for ZenML's repository:
```bash
export ZENML_REPOSITORY_PATH=/path/to/somewhere
```

@@ -3015,7 +3043,7 @@ export ZENML_DEBUG=true
```

## Active Stack
-Set the active stack using its UUID:
+Set the active stack by its UUID:
```bash
export ZENML_ACTIVE_STACK_ID=<STACK_UUID>
```

@@ -3027,9 +3055,9 @@ export ZENML_PREVENT_PIPELINE_EXECUTION=true # Set to true to prevent execution
```

## Disable Rich Traceback
-Disable the rich traceback feature:
+Disable rich traceback:
```bash
-export ZENML_ENABLE_RICH_TRACEBACK=false # Set to false to disable
+export ZENML_ENABLE_RICH_TRACEBACK=false
```

## Disable Colorful Logging
@@ -3037,7 +3065,14 @@ To disable colorful logging:
```bash
export ZENML_LOGGING_COLORS_DISABLED=true
```
-Note: Disabling on the client environment also affects remote orchestrators. For remote orchestration, set it in the orchestrator's environment.
+This setting on the client environment also affects remote orchestrators. To enable it on remote orchestrators while disabling locally, configure as follows:
+```python
+docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "false"})
+
+@pipeline(settings={"docker": docker_settings})
+def my_pipeline() -> None:
+    my_step()
+```

## ZenML Global Config Path
Set the path for the global config file:
@@ -3045,14 +3080,16 @@ export ZENML_CONFIG_PATH=/path/to/somewhere
```

+## Server Configuration
+Refer to the ZenML Server documentation for server configuration options.
+
## Client Configuration
-Connect your ZenML Client to a server by setting:
+Connect the ZenML Client to a server using:
```bash
export ZENML_STORE_URL=https://...
export ZENML_STORE_API_KEY=<API_KEY>
```
-
-For more details on server configuration, refer to the ZenML Server documentation.
+This is useful for CI/CD environments or containerized setups.

==================================================

=== File: docs/book/reference/api.md ===

# ZenML API Reference Summary

-The ZenML server operates as a FastAPI application, with OpenAPI-compliant documentation accessible at `/docs` or `/redoc`. For local usage (via `zenml login --local`), the documentation can be found at `http://127.0.0.1:8237/docs`.
+The ZenML server operates as a FastAPI application, with OpenAPI-compliant documentation accessible at `/docs` or `/redoc`. For local instances (using `zenml login --local`), the documentation is available at `http://127.0.0.1:8237/docs`.
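Since the server is a standard FastAPI application, the machine-readable OpenAPI schema can typically be fetched as well — assuming FastAPI's default schema path, which a deployment could override:

```shell
# Fetch the OpenAPI schema from a local server started via `zenml login --local`.
# /openapi.json is FastAPI's default schema path; adjust if your deployment customizes it.
curl http://127.0.0.1:8237/openapi.json
```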
-## Accessing the API Programmatically with a Bearer Token
+## Accessing the API Programmatically

To access the ZenML API programmatically, follow these steps:

@@ -3073,25 +3110,25 @@ This command generates a `<API_KEY>`.

2. **Obtain an Access Token**:
-   Use the `/api/v1/login` endpoint:
+   Use the `/api/v1/login` endpoint to get an access token:
   ```shell
   curl -X 'POST' \
     '<SERVER_URL>/api/v1/login' \
     -H 'accept: application/json' \
     -H 'Content-Type: application/x-www-form-urlencoded' \
-     -d 'grant_type=zenml-api-key&username=&password=<API_KEY>&client_id=&device_code='
+     -d 'grant_type=zenml-api-key&username=&password=<API_KEY>'
   ```
-   The response will include:
+   The response will include an `access_token`:
   ```json
   {
-     "access_token": "<ACCESS_TOKEN>",
+     "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
     "token_type": "bearer",
     "expires_in": 3600
   }
   ```

-3. **Make API Requests Using the Access Token**:
-   Example of a GET request:
+3. **Make API Requests**:
+   Use the access token for subsequent requests:
   ```shell
   curl -X 'GET' \
     '<SERVER_URL>/api/v1/pipelines?hydrate=false&name=training' \
     -H 'accept: application/json' \
     -H 'Authorization: Bearer <ACCESS_TOKEN>'
   ```

-This summary retains critical technical details while ensuring clarity and conciseness.
+This summary retains all critical steps and commands necessary for programmatic access to the ZenML API.

==================================================

=== File: docs/book/reference/python-client.md ===

### ZenML Python Client Overview

-The ZenML Python `Client` enables programmatic interaction with ZenML resources such as pipelines, runs, and stacks, which are stored in a database. For other programming environments, ZenML resources can be accessed via REST API endpoints.
+The ZenML Python `Client` enables programmatic interaction with ZenML resources, such as pipelines, runs, and stacks, stored in a database within your ZenML instance. For other programming environments, ZenML resources can be accessed via REST API endpoints.

### Usage Example

@@ -3129,22 +3166,31 @@ for pipeline_run in my_runs_on_current_stack:
    print(pipeline_run.name)
```

-### Key ZenML Resources
+### Main ZenML Resources

- **Pipelines**: Tracked pipelines.
- **Pipeline Runs**: Details of executed runs.
- **Run Templates**: Templates for running pipelines.
-- **Step Runs**: Steps within pipeline runs.
-- **Artifacts**: Data written to artifact stores.
+- **Step Runs**: Steps of pipeline runs.
+- **Artifacts**: Information on artifacts from runs.
- **Schedules**: Metadata for scheduled runs.
- **Builds**: Docker images for pipelines.
- **Code Repositories**: Connected git repositories.

+#### Stacks and Authentication
+
+- **Stack**: Registered stacks.
+- **Stack Components**: Components like orchestrators and artifact stores.
+- **Flavors**: Available flavors for stack components.
+- **User**: Registered users.
+- **Secrets**: Authentication secrets in the ZenML Secret Store.
+- **Service Connectors**: Connectors for infrastructure.
+
### Client Methods

-#### List Methods
+#### Reading and Writing Resources

-To list resources:
+**List Methods**: Retrieve lists of resources.

```python
client.list_pipeline_runs(
@@ -3155,85 +3201,73 @@
)
```

-These methods return a `Page` of resources, defaulting to 50 results. You can adjust the `size` and `page` parameters for pagination and filtering.
- -#### Get Methods +These methods return a `Page` of resources, defaulting to 50 results. You can adjust the `size` or use the `page` argument for pagination. -To fetch a specific resource: +**Get Methods**: Fetch specific resources by ID, name, or name prefix. ```python client.get_pipeline_run("413cfb42-a52c-4bf1-a2fd-78af2f7f0101") # By ID client.get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") # By Name -client.get_pipeline_run("first_pipeline-2023_06_20-16") # By Name Prefix ``` -#### Create, Update, and Delete Methods - -Available for certain resources; check the Client SDK documentation for specifics. +**Create, Update, and Delete Methods**: Available for certain resources; check the Client SDK documentation for specifics. #### Active User and Stack -To access the current user and stack: +Access current user and stack information via: ```python -my_runs_on_current_stack = client.list_pipeline_runs( - stack_id=client.active_stack_model.id, - user_id=client.active_user.id, -) +client.active_user +client.active_stack_model ``` ### Resource Models -All methods return **Response Models** (Pydantic Models) ensuring data validation. For example, `client.list_pipeline_runs` returns `Page[PipelineRunResponseModel]`. +ZenML Client methods return **Response Models**, which are Pydantic Models ensuring data integrity. For example, `client.list_pipeline_runs` returns a `Page[PipelineRunResponseModel]`. + +**Request, Update, and Filter Models** are used for server API endpoints but not for Client methods. For detailed model fields, refer to the ZenML Models SDK Documentation. -**Request Models**, **Update Models**, and **Filter Models** are used for server API endpoints but not for Client methods. For detailed model fields, refer to the ZenML Models SDK Documentation. +### Important Links +- [Client SDK Documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/) +- [ZenML Models SDK Documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-models/) ================================================== === File: docs/book/reference/global-settings.md === -# ZenML Global Settings Overview +### ZenML Global Settings Overview -## Global Config Directory -ZenML's global settings are stored in the **ZenML Global Config Directory**, typically located at: -- **Linux:** `~/.config/zenml` -- **Mac:** `~/Library/Application Support/zenml` -- **Windows:** `C:\Users\%USERNAME%\AppData\Local\zenml` +**ZenML Global Config Directory**: The global settings for ZenML are stored in a directory specific to the operating system: -The default path can be overridden by setting the `ZENML_CONFIG_PATH` environment variable. To check the current config directory, use: +- **Linux**: `~/.config/zenml` +- **Mac**: `~/Library/Application Support/zenml` +- **Windows**: `C:\Users\%USERNAME%\AppData\Local\zenml` + +The default path can be changed using the `ZENML_CONFIG_PATH` environment variable. To check the current config directory, use: ```shell zenml status python -c 'from zenml.utils.io_utils import get_global_config_directory; print(get_global_config_directory())' ``` -**Warning:** Avoid manually altering files in this directory. Use CLI commands for management: -- `zenml analytics` - Manage analytics settings -- `zenml clean` - Reset to default configuration -- `zenml downgrade` - Downgrade ZenML version in global config - -## Initialization -Upon first run, ZenML creates the global config directory and initializes it with a default configuration and stack. 
Example output: +**Warning**: Avoid manually altering files in the global config directory. Use CLI commands for management: +- `zenml analytics` - Manage analytics settings. +- `zenml clean` - Reset to default configuration. +- `zenml downgrade` - Match global config version to installed ZenML version. -``` -Initializing the ZenML global configuration version to 0.13.2 -Creating default user 'default' ... -Creating default stack for user 'default'... -``` +**Initialization**: The first run of ZenML creates the global config directory and initializes it with a default configuration and stack. -### Directory Structure -After initialization, the directory layout includes: +**Global Config Directory Structure**: ``` -/home/stefan/.config/zenml -├── config.yaml <- Global Configuration Settings -└── local_stores <- Local data storage for stack components - ├── <- Local Store paths for components +/home/user/.config/zenml +├── config.yaml # Global Configuration Settings +└── local_stores # Local data storage for stack components + ├── # Local Store paths └── default_zen_store - └── zenml.db <- SQLite database for ZenML data + └── zenml.db # SQLite database for ZenML data ``` -### `config.yaml` Contents -The `config.yaml` file contains: +**`config.yaml` Contents**: ```yaml active_stack_id: ... analytics_opt_in: true @@ -3241,40 +3275,28 @@ store: database: ... url: ... username: ... -user_id: d980f13e-05d1-4765-92d2-1dc7eb7addb7 +user_id: version: 0.13.2 ``` -## Usage Analytics -ZenML collects **anonymized** usage statistics to improve the tool. Users can opt-out with: +**Local Stores**: Contains subdirectories for local stack components, such as artifact stores. +**Usage Analytics**: ZenML collects anonymized usage statistics to improve the tool. Users can opt-out with: ```bash zenml analytics opt-out ``` -### Analytics Implementation -Data is aggregated using [Segment](https://segment.com) and processed through a ZenML analytics server to optimize tracking. The client code is available in the [`analytics`](https://github.com/zenml-io/zenml/tree/main/src/zenml/analytics) module. - -## Version Mismatch (Downgrading) -If you downgrade ZenML and encounter a version mismatch error: - -```shell -`The ZenML global configuration version (%s) is higher than the version of ZenML currently being used (%s).` -``` - -To align the global configuration version with the installed version, run: - +**Version Mismatch Handling**: If the global configuration version exceeds the installed version, an error occurs. To downgrade the configuration, use: ```shell zenml downgrade ``` -**Warning:** Downgrading may lead to unexpected behavior. If issues arise, reset the configuration with: - +**Warning**: Downgrading may lead to unexpected behavior. To reset the configuration, run: ```shell zenml clean ``` -This documentation provides essential details about ZenML's global settings, configuration management, and analytics usage, ensuring effective use and troubleshooting of the tool. +For further details on analytics and data privacy, users can contact ZenML support. ================================================== @@ -3282,17 +3304,11 @@ This documentation provides essential details about ZenML's global settings, con ### Overview of ZenML Integrations -ZenML enhances MLOps pipelines by integrating with various tools across different categories, allowing users to streamline their ML workflows. 
Key integrations include:
-
-- **Orchestrators**: Airflow, Kubeflow
-- **Experiment Trackers**: MLflow Tracking, Weights & Biases
-- **Model Deployers**: MLflow, Seldon Core
-
-ZenML facilitates the management of MLOps tools in one place, ensuring no vendor lock-in and flexibility to switch tools as requirements evolve.
+ZenML enhances MLOps pipelines by integrating with various tools across different categories, allowing users to streamline their ML workflows. Key integrations include orchestrators like [Airflow](orchestrators/airflow.md) and [Kubeflow](orchestrators/kubeflow.md), experiment trackers such as [MLflow Tracking](experiment-trackers/mlflow.md) and [Weights & Biases](experiment-trackers/wandb.md), and model deployment options like [Seldon Core](model-deployers/seldon.md). ZenML provides flexibility without vendor lock-in, enabling easy tool switching as requirements evolve.

### Available Integrations

-A comprehensive list of supported ZenML integrations can be found on the [ZenML integrations webpage](https://zenml.io/integrations) and in the [integrations directory on GitHub](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations).
+A comprehensive list of supported integrations can be found on the [ZenML integrations webpage](https://zenml.io/integrations) or in the [integrations directory on GitHub](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations).

### Installing ZenML Integrations

@@ -3302,120 +3318,133 @@ zenml integration install kubeflow mlflow seldon -y
```

-This command installs the preferred versions of the specified integrations via pip. The `-y` flag confirms the installation without prompting.
+This command installs preferred versions via pip:
+
+```bash
+pip install kubeflow==<VERSION> mlflow==<VERSION> seldon==<VERSION>
+```
+
+The `-y` flag auto-confirms installation prompts. For a full list of CLI commands, run `zenml integration --help`.

### Using `uv` for Package Installation

-You can opt to use [`uv`](https://github.com/astral-sh/uv) as a package manager by adding the `--uv` flag to the installation command. Ensure `uv` is installed, as this is an experimental feature.
+You can use [`uv`](https://github.com/astral-sh/uv) as a package manager by adding the `--uv` flag:
+
+```bash
+zenml integration install --uv kubeflow mlflow seldon
+```
+
+Ensure `uv` is installed, as this is an experimental feature.

### Upgrading ZenML Integrations

-To upgrade integrations to their latest versions, use:
+To upgrade integrations, use:

```bash
zenml integration upgrade mlflow pytorch -y
```

-The `-y` flag confirms the upgrade without prompts. If no integrations are specified, all installed integrations will be upgraded.
+The `-y` flag confirms upgrades without prompts. If no integrations are specified, all installed integrations will be upgraded.

### Community Contributions

-ZenML encourages community involvement in developing new integrations. For more information, refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and the [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md).
+ZenML is open to community contributions for new integrations. Refer to the public [roadmap](https://zenml.io/roadmap) for prioritized tools and check the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for details on contributing.
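As a quick way to verify what the commands in this section actually installed, the CLI's integration command group (discoverable via the `zenml integration --help` output mentioned above) also provides a listing:

```bash
# List known integrations and whether their requirements are installed.
zenml integration list
```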
================================================== === File: docs/book/component-guide/component-guide.md === -# Overview of MLOps Components in ZenML - -ZenML categorizes MLOps tools into stack components, each serving a specific function in the MLOps pipeline. These components standardize workflows and can be implemented through custom implementations or built-in integrations. - -## Supported Stack Components +### Overview of MLOps Components in ZenML -| **Type** | **Description** | -|-----------------------|---------------------------------------------------------| -| [Orchestrator](./orchestrators/orchestrators.md) | Manages pipeline execution. | -| [Artifact Store](./artifact-stores/artifact-stores.md) | Stores artifacts generated by pipelines. | -| [Container Registry](./container-registries/container-registries.md) | Repository for container images. | -| [Step Operator](./step-operators/step-operators.md) | Executes individual steps in specific environments. | -| [Model Deployer](./model-deployers/model-deployers.md) | Handles online model serving. | -| [Feature Store](./feature-stores/feature-stores.md) | Manages data and features. | -| [Experiment Tracker](./experiment-trackers/experiment-trackers.md) | Tracks ML experiments. | -| [Alerter](./alerters/alerters.md) | Sends alerts through designated channels. | -| [Annotator](./annotators/annotators.md) | Labels and annotates data. | -| [Data Validator](./data-validators/data-validators.md) | Validates data and models. | -| [Image Builder](./image-builders/image-builders.md) | Builds container images. | -| [Model Registry](./model-registries/model-registries.md) | Manages ML models. | +MLOps can be overwhelming due to the multitude of tools available. ZenML categorizes these tools into **Stacks and Stack Components** to clarify their roles in MLOps pipelines. Stack components are base abstractions that standardize workflows, allowing users to implement custom components or utilize built-in integrations. -Each ZenML pipeline requires a **stack** that includes at least an orchestrator and an artifact store, with other components being optional based on MLOps maturity. 
+#### Supported Stack Components: +| **Type** | **Description** | +|-----------------------|----------------------------------------------------------| +| [Orchestrator](./orchestrators/orchestrators.md) | Manages pipeline runs | +| [Artifact Store](./artifact-stores/artifact-stores.md) | Stores artifacts created by pipelines | +| [Container Registry](./container-registries/container-registries.md) | Stores container images | +| [Step Operator](./step-operators/step-operators.md) | Executes individual steps in runtime environments | +| [Model Deployer](./model-deployers/model-deployers.md) | Handles online model serving | +| [Feature Store](./feature-stores/feature-stores.md) | Manages data/features | +| [Experiment Tracker](./experiment-trackers/experiment-trackers.md) | Tracks ML experiments | +| [Alerter](./alerters/alerters.md) | Sends alerts through specified channels | +| [Annotator](./annotators/annotators.md) | Labels and annotates data | +| [Data Validator](./data-validators/data-validators.md) | Validates data and models | +| [Image Builder](./image-builders/image-builders.md) | Builds container images | +| [Model Registry](./model-registries/model-registries.md) | Manages and interacts with ML models | -## Custom Component Flavors +Each ZenML pipeline requires a **stack** that includes at least an orchestrator and an artifact store, with other components being optional as the pipeline matures. -Users can create custom component flavors to modify ZenML's behavior. For guidance, refer to the [general guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific guides for component types, such as the [custom orchestrator guide](orchestrators/custom.md). +#### Custom Component Flavors +Users can create custom component flavors to tailor ZenML's behavior. For more information, refer to the [guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific guides for component types, such as the [custom orchestrator guide](orchestrators/custom.md). ================================================== === File: docs/book/component-guide/README.md === -# Overview of ZenML MLOps Components and Integrations +# Overview of MLOps Components and Integrations in ZenML -ZenML categorizes MLOps tools into stack components to streamline the understanding and implementation of MLOps pipelines. Each stack component serves a specific function in the pipeline, and ZenML standardizes these components as base abstractions. Users can implement custom stack components or utilize built-in integrations. +ZenML categorizes MLOps tools into distinct **Stack Components**, each serving a specific function in the MLOps pipeline. This categorization helps standardize workflows and allows users to implement or integrate these components into their pipelines. 
Essential stack components include: -## Stack Components -ZenML currently supports the following stack components, each with a distinct role: - -| **Component Type** | **Description** | -|--------------------------|-----------------------------------------------------------| -| **Orchestrator** | Manages pipeline runs | -| **Artifact Store** | Stores artifacts generated by pipelines | -| **Container Registry** | Stores container images | -| **Data Validator** | Validates data and models | -| **Experiment Tracker** | Tracks machine learning experiments | -| **Model Deployer** | Handles online model serving | -| **Step Operator** | Executes individual steps in specialized environments | -| **Alerter** | Sends alerts through specified channels | -| **Image Builder** | Builds container images | -| **Annotator** | Labels and annotates data | -| **Model Registry** | Manages and interacts with ML models | -| **Feature Store** | Manages data/features | - -Each ZenML pipeline requires at least an orchestrator and an artifact store, while other components can be added as needed. +| **Type of Stack Component** | **Description** | +|-----------------------------|------------------| +| [Orchestrator](orchestrators/orchestrators.md) | Manages pipeline runs | +| [Artifact Store](artifact-stores/artifact-stores.md) | Stores artifacts from pipelines | +| [Container Registry](container-registries/container-registries.md) | Stores container images | +| [Data Validator](data-validators/data-validators.md) | Validates data and models | +| [Experiment Tracker](experiment-trackers/experiment-trackers.md) | Tracks ML experiments | +| [Model Deployer](model-deployers/model-deployers.md) | Online model serving platforms | +| [Step Operator](step-operators/step-operators.md) | Executes pipeline steps in specific environments | +| [Alerter](alerters/alerters.md) | Sends alerts through channels | +| [Image Builder](image-builders/image-builders.md) | Builds container images | +| [Annotator](annotators/annotators.md) | Labels and annotates data | +| [Model Registry](model-registries/model-registries.md) | Manages ML models | +| [Feature Store](feature-stores/feature-stores.md) | Manages data/features | + +Each ZenML pipeline requires at least an orchestrator and an artifact store, with other components being optional based on the pipeline's maturity. ## Custom Component Flavors -Users can create custom components by writing their own component flavors. For detailed guidance, refer to the [custom stack component documentation](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). -## Integrations -ZenML enhances MLOps processes by integrating with various tools, allowing users to orchestrate workflows and track experiments. Examples include: +Users can create custom component **flavors** to tailor ZenML's behavior. For guidance, refer to the [general guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) and specialized guides like the [custom orchestrator guide](orchestrators/custom.md). -- **Orchestrators**: [Airflow](orchestrators/airflow.md), [Kubeflow](orchestrators/kubeflow.md) -- **Experiment Trackers**: [MLflow Tracking](experiment-trackers/mlflow.md), [Weights & Biases](experiment-trackers/wandb.md) -- **Model Deployment**: [MLflow](model-deployers/mlflow.md), [Seldon Core](model-deployers/seldon.md) +## Integrations -ZenML provides flexibility, enabling users to switch tools without vendor lock-in. 
+ZenML enhances MLOps processes by integrating with various tools, allowing users to orchestrate workflows with tools like [Airflow](orchestrators/airflow.md) or [Kubeflow](orchestrators/kubeflow.md), track experiments with [MLflow Tracking](experiment-trackers/mlflow.md) or [Weights & Biases](experiment-trackers/wandb.md), and deploy models using [Seldon Core](model-deployers/seldon.md). This integration flexibility prevents vendor lock-in and enables easy tool switching.

### Available Integrations
+
+A comprehensive list of supported integrations can be found on the [ZenML integrations page](https://zenml.io/integrations) or in the [integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations).

### Installing ZenML Integrations
+
To install integrations, use:

```bash
zenml integration install kubeflow mlflow seldon -y
```

-This command installs the preferred versions via pip.
+This command installs preferred versions via pip:
+
+```bash
+pip install kubeflow==<VERSION> mlflow==<VERSION> seldon==<VERSION>
+```
+
+The `-y` flag confirms installations without prompts. For a complete list of CLI commands, run `zenml integration --help`.

### Upgrade ZenML Integrations
+
To upgrade integrations, use:

```bash
zenml integration upgrade mlflow pytorch -y
```

+If no integrations are specified, all installed integrations will be upgraded.
+
### Community Contributions

-ZenML encourages contributions for new integrations. Refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for details.
-For further information on using `uv` as a package manager, refer to the [Astral documentation](https://docs.astral.sh/uv/guides/integration/pytorch/).
+ZenML welcomes community contributions for new integrations. Refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and the [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for more information.

==================================================

=== File: docs/book/component-guide/data-validators/evidently.md ===

### Summary of Evidently Data Validator Documentation

-**Overview**
-Evidently, an open-source library, is integrated with ZenML to monitor and validate machine learning models by analyzing data quality, data drift, model drift, and performance. It generates reports and runs checks that can be used for automated corrective actions or visual interpretations.
+**Overview:** The Evidently Data Validator, integrated with ZenML, utilizes the Evidently library to perform data quality, data drift, model drift, and model performance analyses. It generates reports and checks that can be used for automated corrective actions or visual interpretations.

-**Key Features**
-Evidently supports:
-- **Data Quality**: Provides feature statistics and behavior overviews for single or comparative datasets.
-- **Data Drift**: Detects changes in feature distributions between datasets with identical schemas.
-- **Target Drift**: Analyzes changes in target functions or model predictions.
-- **Performance Evaluation**: Assesses model performance using datasets with target and prediction columns.
+**Usage Scenarios:**
+Evidently is suitable for monitoring and debugging machine learning models through:
+- **Data Quality Reports:** Analyze feature statistics and compare datasets.
+- **Data Drift Reports:** Detect changes in feature distributions between datasets.
+- **Target Drift Reports:** Explore changes in target functions or model predictions.
+- **Model Performance Reports:** Evaluate model performance against datasets.
 
-**Installation**
-To use the Evidently Data Validator with ZenML, install the integration:
+**Deployment:**
+To deploy the Evidently Data Validator:
```shell
zenml integration install evidently -y
+zenml data-validator register evidently_data_validator --flavor=evidently
+zenml stack register custom_stack -dv evidently_data_validator ... --set
```
 
-**Usage**
-Evidently can be used in ZenML pipelines through three methods:
-1. **Standard Report Step**: Recommended for ease of use.
-2. **Custom Step Implementation**: Provides flexibility for specific pipeline needs.
-3. **Direct Library Use**: Full control over Evidently features.
-
-**Example of Standard Report Step**
-Instantiate and configure the Evidently report step:
-```python
-from zenml.integrations.evidently.steps import evidently_report_step
+**Data Profiling:**
+Evidently's profiling functions take one or two `pandas.DataFrame` datasets as input and generate a `Report` object; no model is needed. For certain reports, the data must include `target` and `prediction` columns.
 
-text_data_report = evidently_report_step.with_options(
-    parameters=dict(
-        column_mapping=EvidentlyColumnMapping(
-            target="Rating",
-            numerical_features=["Age", "Positive_Feedback_Count"],
-            categorical_features=["Division_Name", "Department_Name", "Class_Name"],
-            text_features=["Review_Text", "Title"],
-        ),
-        metrics=[
-            EvidentlyMetricConfig.metric("DataQualityPreset"),
-            EvidentlyMetricConfig.metric("TextOverviewPreset", column_name="Review_Text"),
-        ],
-        download_nltk_data=True,
-    ),
-)
-```
+**Using Evidently in ZenML Pipelines:**
+1. **Standard Report Step:**
+   ```python
+   from zenml.integrations.evidently.column_mapping import EvidentlyColumnMapping
+   from zenml.integrations.evidently.metrics import EvidentlyMetricConfig
+   from zenml.integrations.evidently.steps import evidently_report_step
+   
+   text_data_report = evidently_report_step.with_options(
+       parameters=dict(
+           column_mapping=EvidentlyColumnMapping(...),
+           metrics=[EvidentlyMetricConfig.metric("DataQualityPreset"), ...],
+           download_nltk_data=True,
+       ),
+   )
+   ```
 
-**Pipeline Integration**
-Integrate the report step into a pipeline:
-```python
-@pipeline(enable_cache=False)
-def text_data_report_test_pipeline():
-    data = data_loader()
-    reference_dataset, comparison_dataset = data_splitter(data)
-    report, _ = text_data_report(reference_dataset=reference_dataset, comparison_dataset=comparison_dataset)
-```
+2. 
**Pipeline Example:** + ```python + @pipeline(enable_cache=False) + def text_data_report_test_pipeline(): + data = data_loader() + reference_dataset, comparison_dataset = data_splitter(data) + report, _ = text_data_report(reference_dataset=reference_dataset, comparison_dataset=comparison_dataset) + ``` -**Data Validation** -Similar to profiling, Evidently can run automated data validation tests: +**Data Validation:** +Evidently can also run automated data validation tests: ```python from zenml.integrations.evidently.steps import evidently_test_step text_data_test = evidently_test_step.with_options( parameters=dict( - column_mapping=EvidentlyColumnMapping( - target="Rating", - numerical_features=["Age", "Positive_Feedback_Count"], - categorical_features=["Division_Name", "Department_Name", "Class_Name"], - text_features=["Review_Text", "Title"], - ), - tests=[EvidentlyTestConfig.test("DataQualityTestPreset")], + column_mapping=EvidentlyColumnMapping(...), + tests=[EvidentlyTestConfig.test("DataQualityTestPreset"), ...], download_nltk_data=True, ), ) ``` -**Direct Library Use** -You can also directly use Evidently in custom steps: +**Custom Steps:** +You can create custom steps for data profiling and validation: ```python -from evidently.report import Report +@step +def data_profiling(reference_dataset: pd.DataFrame, comparison_dataset: pd.DataFrame): + data_validator = EvidentlyDataValidator.get_active_data_validator() + report = data_validator.data_profiling(...) + return report.json(), HTMLString(report.show(mode="inline").data) +``` +**Directly Using Evidently:** +You can also directly utilize the Evidently library in custom steps: +```python @step def data_profiler(dataset: pd.DataFrame): report = Report(metrics=[metric_preset.DataQualityPreset()]) @@ -3508,8 +3530,8 @@ def data_profiler(dataset: pd.DataFrame): return report.json(), HTMLString(report.show(mode="inline").data) ``` -**Visualization** -Reports can be visualized in the ZenML dashboard or Jupyter notebooks: +**Visualizing Reports:** +Reports can be visualized in the ZenML dashboard or Jupyter notebooks using: ```python def visualize_results(pipeline_name: str, step_name: str): pipeline = Client().get_pipeline(pipeline=pipeline_name) @@ -3517,8 +3539,7 @@ def visualize_results(pipeline_name: str, step_name: str): evidently_step.visualize() ``` -**Conclusion** -Evidently provides a comprehensive set of tools for data and model validation, making it easier to maintain data quality and model performance in machine learning workflows. For detailed configurations and options, refer to the official Evidently documentation. +This documentation provides a comprehensive guide on how to implement and utilize the Evidently Data Validator within ZenML for effective data and model monitoring. For detailed configurations and metrics, refer to the official Evidently documentation. ================================================== @@ -3526,91 +3547,73 @@ Evidently provides a comprehensive set of tools for data and model validation, m ### Summary of Deepchecks Integration with ZenML -**Overview**: -Deepchecks is an open-source library for validating data and models in ZenML pipelines. It performs tests for data integrity, data drift, model drift, and model performance with minimal user configuration. +**Overview** +Deepchecks, an open-source library, is integrated with ZenML to validate data and models in pipelines. 
It supports various tests for data integrity, drift, and model performance, applicable to both tabular and computer vision data. -**Supported Formats**: +**Supported Formats** - **Tabular Data**: `pandas.DataFrame`, models as `sklearn.base.ClassifierMixin`. -- **Computer Vision Data**: `torch.utils.data.dataloader.DataLoader`, models as `torch.nn.Module`. +- **Computer Vision**: `torch.utils.data.dataloader.DataLoader`, models as `torch.nn.Module`. -**Key Features**: -- **Data Integrity Checks**: Identify issues like missing values and conflicting labels. -- **Data Drift Checks**: Detect data skew by comparing target and reference datasets. -- **Model Performance Checks**: Evaluate model performance using metrics like confusion matrix. -- **Multi-Model Performance Reports**: Summarize performance scores for multiple models. +**Key Features** +- **Data Integrity Checks**: Identify issues like missing values and mixed data types. +- **Data Drift Checks**: Compare datasets to detect feature and label drift. +- **Model Performance Checks**: Evaluate model performance using metrics like confusion matrices. +- **Multi-Model Performance Reports**: Summarize performance across multiple models. -**Installation**: -To install the Deepchecks integration: +**Installation** +To use Deepchecks with ZenML, install the integration: ```shell zenml integration install deepchecks -y ``` -**Registering the Data Validator**: +**Registering the Data Validator** +Register the Deepchecks Data Validator in your stack: ```shell zenml data-validator register deepchecks_data_validator --flavor=deepchecks zenml stack register custom_stack -dv deepchecks_data_validator ... --set ``` -**Usage**: -Deepchecks validation checks are organized into four categories: +**Using Deepchecks in Pipelines** +Deepchecks validation checks are categorized based on input requirements: 1. **Data Integrity Checks**: Single dataset input. 2. **Data Drift Checks**: Two datasets (target and reference). 3. **Model Validation Checks**: Single dataset and model input. 4. **Model Drift Checks**: Two datasets and a model input. -**Standard Steps**: -- `deepchecks_data_integrity_check_step`: Run data integrity tests. -- `deepchecks_data_drift_check_step`: Run data drift tests. -- `deepchecks_model_validation_check_step`: Run model performance tests. -- `deepchecks_model_drift_check_step`: Run model comparison tests. +**Standard Steps** +ZenML provides four standard steps for Deepchecks: +- `deepchecks_data_integrity_check_step` +- `deepchecks_data_drift_check_step` +- `deepchecks_model_validation_check_step` +- `deepchecks_model_drift_check_step` -**Example of Data Integrity Check**: +Example of a data integrity check step: ```python from zenml.integrations.deepchecks.steps import deepchecks_data_integrity_check_step data_validator = deepchecks_data_integrity_check_step.with_options( parameters=dict(dataset_kwargs=dict(label="target", cat_features=[])), ) - -@pipeline -def data_validation_pipeline(): - df_train, df_test = data_loader() - data_validator(dataset=df_train) - -data_validation_pipeline() ``` -**Customizing Checks**: -You can specify a custom list of checks: +**Customizing Checks** +You can specify a custom list of checks and additional keyword arguments: ```python -from zenml.integrations.deepchecks.validation_checks import DeepchecksDataIntegrityCheck - deepchecks_data_integrity_check_step( - check_list=[ - DeepchecksDataIntegrityCheck.TABULAR_MIXED_DATA_TYPES, - DeepchecksDataIntegrityCheck.TABULAR_DATA_DUPLICATES, - ], - dataset=... 
+        check_list=[DeepchecksDataIntegrityCheck.TABULAR_MIXED_DATA_TYPES],
+        dataset_kwargs=dict(label='class', cat_features=['country', 'state']),
 )
 ```
 
-**Docker Configuration for Remote Orchestrators**:
-Create a `deepchecks-zenml.Dockerfile`:
+**Docker Configuration for Remote Orchestrators**
+For remote orchestrators, extend the Docker image to include required binaries:
 ```shell
 ARG ZENML_VERSION=0.20.0
 FROM zenmldocker/zenml:${ZENML_VERSION} AS base
 RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 -y
 ```
-Use it in your pipeline definition:
-```python
-docker_settings = DockerSettings(dockerfile="deepchecks-zenml.Dockerfile")
-
-@pipeline(settings={"docker": docker_settings})
-def my_pipeline(...):
-    ...
-```
+Reference the custom Dockerfile in your pipeline definition via Docker settings:
+```python
+from zenml import pipeline
+from zenml.config import DockerSettings
+
+docker_settings = DockerSettings(dockerfile="deepchecks-zenml.Dockerfile")
+
+@pipeline(settings={"docker": docker_settings})
+def my_pipeline(...):
+    ...
+```
 
-**Visualizing Results**:
+**Visualizing Results**
 Results can be visualized in the ZenML dashboard or Jupyter notebooks:
 ```python
 from zenml.client import Client
@@ -3622,7 +3625,7 @@ def visualize_results(pipeline_name: str, step_name: str) -> None:
     step.visualize()
 ```
 
-This summary encapsulates the essential technical details of using Deepchecks with ZenML while omitting redundancy and verbose explanations.
+For the full list of available checks and configuration options, refer to the official Deepchecks documentation.
 
==================================================
 
@@ -3630,45 +3633,37 @@ This summary encapsulates the essential technical details of using Deepchecks wi
 
# Data Validators
 
-Data Validators are essential tools in machine learning (ML) that ensure data quality and monitor model performance throughout the ML project lifecycle. They facilitate data profiling, integrity testing, and drift detection at various stages, including data ingestion, model training, and inference.
-
-## Key Points
-
-- **Purpose**: Maintain data quality and model performance.
-- **Techniques**: Data profiling, integrity testing, and drift detection.
-- **Visualization**: Data profiles and performance evaluations can be analyzed to identify issues.
-
-### Integration with ZenML
-
-- Data Validators are optional components in ZenML stacks.
-- They generate versioned data profiles and quality reports stored in the [Artifact Store](../artifact-stores/artifact-stores.md).
-- Data Validators are useful for:
-  - Logging data quality and model performance during development.
-  - Running integrity checks on regularly ingested data.
-  - Comparing new training data and model performance in continuous training pipelines.
-  - Analyzing data drift in batch and online inference scenarios.
-
-### Available Data Validators
+Data Validators are essential tools for ensuring data quality and monitoring model performance throughout the machine learning lifecycle. They help detect issues with data integrity, data drift, and model drift at various stages, including data ingestion, model training, and inference.
-| Data Validator | Features | Data Types | Model Types | Notes | Flavor/Integration | -|----------------------|------------------------------------------------|----------------------------------------------|-------------------------------------------|------------------------------------------------|-------------------------| -| [Deepchecks](deepchecks.md) | data quality, drift, performance | tabular: `pandas.DataFrame`, CV: `torch.utils.data.dataloader.DataLoader` | tabular: `sklearn.base.ClassifierMixin`, CV: `torch.nn.Module` | Adds validation tests to pipelines | `deepchecks` | -| [Evidently](evidently.md) | data quality, drift, performance | tabular: `pandas.DataFrame` | N/A | Generates reports and visualizations | `evidently` | -| [Great Expectations](great-expectations.md) | profiling, data quality | tabular: `pandas.DataFrame` | N/A | Performs data testing and profiling | `great_expectations` | -| [Whylogs/WhyLabs](whylogs.md) | data drift | tabular: `pandas.DataFrame` | N/A | Generates data profiles for WhyLabs | `whylogs` | - -### Usage - -1. **Configuration**: Add a Data Validator to your ZenML stack. -2. **Integration**: Use built-in validation steps in pipelines or libraries directly in custom steps. -3. **Artifact Management**: Access and visualize validation artifacts in subsequent steps or fetch them later. +## Key Concepts +- **Data Validators**: Optional components in ZenML stacks that generate data profiles and quality reports, stored in the [Artifact Store](../artifact-stores/artifact-stores.md). +- **Data-Centric AI**: Incorporating Data Validators supports data-centric practices in ML workflows. -For a list of available Data Validator flavors, use: +## Use Cases +1. **Early Development**: Log data quality and model performance. +2. **Regular Data Ingestion**: Conduct integrity checks to prevent downstream issues. +3. **Continuous Training**: Compare new training data and model performance against references. +4. **Batch and Online Inference**: Analyze data drift and detect discrepancies between training and serving data. + +## Data Validator Flavors +| Data Validator | Features | Data Types | Model Types | Notes | Flavor/Integration | +|----------------|----------|-------------|-------------|-------|--------------------| +| [Deepchecks](deepchecks.md) | Data quality, drift, performance | `pandas.DataFrame`, `torch.utils.data.dataloader.DataLoader` | `sklearn.base.ClassifierMixin`, `torch.nn.Module` | Validation tests for pipelines | `deepchecks` | +| [Evidently](evidently.md) | Data quality, drift, performance | `pandas.DataFrame` | N/A | Generates reports and visualizations | `evidently` | +| [Great Expectations](great-expectations.md) | Profiling, quality | `pandas.DataFrame` | N/A | Data testing and documentation | `great_expectations` | +| [Whylogs/WhyLabs](whylogs.md) | Data drift | `pandas.DataFrame` | N/A | Generates data profiles | `whylogs` | + +To view available Data Validator flavors, use: ```shell zenml data-validator flavor list ``` -Consult specific Data Validator documentation for detailed usage instructions. +## Usage Steps +1. **Configuration**: Add a Data Validator to your ZenML stack. +2. **Integration**: Use built-in validation steps in pipelines or directly call libraries in custom steps. +3. **Artifact Management**: Access and visualize validation artifacts in subsequent steps or fetch them later for processing. + +Refer to specific [Data Validator flavor documentation](data-validators.md#data-validator-flavors) for detailed usage instructions. 
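To make usage step 3 concrete, here is a minimal sketch of fetching a validation artifact from an earlier run (the pipeline and step names are hypothetical placeholders; the pattern mirrors the flavor-specific visualization examples in the pages that follow):

```python
from zenml.client import Client

# Fetch the latest run of a (hypothetical) pipeline containing a validator step
pipeline = Client().get_pipeline(pipeline="validation_pipeline")
validator_step = pipeline.last_run.steps["validator"]

# Render the versioned data profile / quality report stored in the Artifact Store
validator_step.visualize()
```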
================================================== @@ -3676,48 +3671,48 @@ Consult specific Data Validator documentation for detailed usage instructions. ### Summary of Whylogs/WhyLabs Profiling Documentation -#### Overview -The **whylogs/WhyLabs Data Validator** integrates with ZenML to generate and track data profiles using the **whylogs** library. These profiles provide descriptive statistics of data, enabling automated corrective actions and visual interpretations. +**Overview**: The whylogs/WhyLabs integration with ZenML enables the collection and visualization of data profiles, which are statistical summaries of your data. These profiles can be used for automated corrective actions and visual analysis. -#### Use Cases -Use whylogs for: -- **Data Quality**: Validate model input data quality. -- **Data Drift**: Detect changes in model input features. +**Use Cases**: +- **Data Quality**: Validate inputs in models or pipelines. +- **Data Drift**: Detect shifts in model input features. - **Model Drift**: Identify training-serving skew and performance degradation. -**Note**: Currently, the integration only supports tabular data in `pandas.DataFrame` format. - -#### Deployment -To deploy the whylogs Data Validator, install the integration: -```shell -zenml integration install whylogs -y -``` -Register the Data Validator: -```shell -zenml data-validator register whylogs_data_validator --flavor=whylogs -zenml stack register custom_stack -dv whylogs_data_validator ... --set -``` -For WhyLabs logging, create a ZenML Secret for authentication: -```shell -zenml secret create whylabs_secret \ - --whylabs_default_org_id= \ - --whylabs_api_key= +**Deployment**: +1. Install the integration: + ```shell + zenml integration install whylogs -y + ``` +2. Register the Data Validator: + ```shell + zenml data-validator register whylogs_data_validator --flavor=whylogs + zenml stack register custom_stack -dv whylogs_data_validator ... --set + ``` +3. For WhyLabs logging, create a secret for authentication: + ```shell + zenml secret create whylabs_secret \ + --whylabs_default_org_id= \ + --whylabs_api_key= + zenml data-validator register whylogs_data_validator --flavor=whylogs \ + --authentication_secret=whylabs_secret + ``` -zenml data-validator register whylogs_data_validator --flavor=whylogs \ - --authentication_secret=whylabs_secret -``` -Enable logging in custom steps: -```python -@step(settings={"data_validator": WhylogsDataValidatorSettings(enable_whylabs=True, dataset_id="model-1")}) -def data_loader() -> Tuple[Annotated[pd.DataFrame, "data"], Annotated[DatasetProfileView, "profile"]]: - ... -``` +**Pipeline Integration**: +- Enable WhyLabs logging in custom steps: + ```python + @step(settings={"data_validator": WhylogsDataValidatorSettings(enable_whylabs=True, dataset_id="model-1")}) + def data_loader() -> Tuple[Annotated[pd.DataFrame, "data"], Annotated[DatasetProfileView, "profile"]]: + X, y = datasets.load_diabetes(return_X_y=True, as_frame=True) + df = pd.merge(X, y, left_index=True, right_index=True) + profile = why.log(pandas=df).profile().view() + return df, profile + ``` -#### Usage -Three methods to use whylogs in ZenML pipelines: -1. **Standard `WhylogsProfilerStep`**: Easy to use but limited customization. -2. **Custom Step with Data Validator**: More flexibility but limited to Data Validator functionality. -3. **Directly using whylogs**: Full control over features. +**Using Whylogs**: +- Three methods to utilize whylogs: + 1. **Standard Step**: Use `WhylogsProfilerStep` for ease of use. 
+  2. **Data Validator Methods**: Call methods in custom steps for flexibility.
+  3. **Direct Library Use**: Leverage the whylogs library directly for full control.
 
**Example of Standard Step**:
```python
@@ -3726,16 +3721,18 @@ from zenml.integrations.whylogs.steps import get_whylogs_profiler_step
 
train_data_profiler = get_whylogs_profiler_step(dataset_id="model-2")
```
 
-**Pipeline Example**:
+**Data Validator Implementation**:
```python
-@pipeline
-def data_profiling_pipeline():
-    data, _ = data_loader()
-    train_data_profiler(train)
+import pandas as pd
+from whylogs.core import DatasetProfileView
+
+from zenml import step
+from zenml.integrations.whylogs.data_validators import WhylogsDataValidator
+from zenml.integrations.whylogs.flavors.whylogs_data_validator_flavor import (
+    WhylogsDataValidatorSettings,
+)
+
+@step(settings={"data_validator": WhylogsDataValidatorSettings(enable_whylabs=True, dataset_id="")})
+def data_profiler(dataset: pd.DataFrame) -> DatasetProfileView:
+    data_validator = WhylogsDataValidator.get_active_data_validator()
+    profile = data_validator.data_profiling(dataset)
+    data_validator.upload_profile_view(profile)
+    return profile
```
 
-#### Visualizing Profiles
-Profiles can be visualized in the ZenML dashboard or Jupyter notebooks:
+**Visualizing Profiles**:
+- View profiles in the ZenML dashboard or use Jupyter notebooks:
```python
def visualize_statistics(step_name: str, reference_step_name: Optional[str] = None) -> None:
    pipe = Client().get_pipeline(pipeline="data_profiling_pipeline")
@@ -3743,8 +3740,7 @@ def visualize_statistics(step_name: str, reference_step_name: Optional[str] = No
    whylogs_step.visualize()
```
 
-### Conclusion
-The whylogs/WhyLabs integration with ZenML provides a robust framework for data profiling, validation, and visualization, particularly suited for managing data quality and drift in machine learning pipelines. For further details, refer to the official documentation for whylogs and ZenML.
+Together, these options cover data profiling, validation, and visualization for managing data quality and drift in ZenML pipelines; for further details, refer to the official whylogs and ZenML documentation.
 
==================================================
 
@@ -3752,42 +3748,45 @@ The whylogs/WhyLabs integration with ZenML provides a robust framework for data
 
### Great Expectations Integration with ZenML
 
-**Overview**: Great Expectations is an open-source library for data quality checks, profiling, and documentation. The ZenML integration allows users to run data validation tests on `pandas.DataFrame` datasets within their pipelines, generating documentation and enabling automated corrective actions.
+**Overview**: Great Expectations is an open-source library for data quality checks, profiling, and documentation. The ZenML integration allows users to implement data validation in pipelines using the Great Expectations Data Validator.
 
-**Key Features**:
+#### Key Features:
- **Data Profiling**: Automatically generates validation rules (Expectations) from dataset properties.
-- **Data Quality Checks**: Validates datasets against predefined or inferred Expectations.
-- **Data Documentation**: Maintains human-readable documentation of validation rules and results.
+- **Data Quality**: Validates datasets against predefined or inferred Expectations.
+- **Data Docs**: Generates human-readable documentation of validation rules and results.
 
-**Deployment**:
-1. **Installation**:
+#### When to Use:
+Utilize the Great Expectations Data Validator when needing automated data validation features for `pandas.DataFrame` datasets in ZenML pipelines.
+
+#### Installation:
+To install the Great Expectations integration:
+```shell
+zenml integration install great_expectations -y
+```
+
+#### Deployment Options:
+1. 
**Let ZenML Manage Configuration**: ZenML initializes and manages Great Expectations configuration, storing Expectation Suites and Validation Results in the ZenML Artifact Store. ```shell - zenml integration install great_expectations -y + zenml data-validator register ge_data_validator --flavor=great_expectations + zenml stack register custom_stack -dv ge_data_validator ... --set ``` -2. **Registering Data Validator**: - - To let ZenML manage the configuration: - ```shell - zenml data-validator register ge_data_validator --flavor=great_expectations - zenml stack register custom_stack -dv ge_data_validator ... --set - ``` - - To use an existing configuration: - ```shell - zenml data-validator register ge_data_validator --flavor=great_expectations --context_root_dir=/path/to/my/great_expectations - zenml stack register custom_stack -dv ge_data_validator ... --set - ``` - - To migrate configuration to ZenML: - ```shell - zenml data-validator register ge_data_validator --flavor=great_expectations --context_config=@/path/to/my/great_expectations/great_expectations.yaml - zenml stack register custom_stack -dv ge_data_validator ... --set - ``` +2. **Use Existing Configuration**: Point to an existing `great_expectations.yaml` file. + ```shell + zenml data-validator register ge_data_validator --flavor=great_expectations --context_root_dir=/path/to/my/great_expectations + ``` -**Advanced Configuration**: -- `configure_zenml_stores`: Automatically updates configuration to use ZenML's Artifact Store. +3. **Migrate Configuration to ZenML**: Load existing configuration using the `@` operator. + ```shell + zenml data-validator register ge_data_validator --flavor=great_expectations --context_config=@/path/to/my/great_expectations/great_expectations.yaml + ``` + +#### Advanced Configuration: +- `configure_zenml_stores`: Automatically updates Great Expectations configuration to use ZenML Artifact Store. - `configure_local_docs`: Sets up a local Data Docs site for visualization. -**Usage in Pipelines**: -1. **Data Profiler Step**: +#### Usage in Pipelines: +- **Data Profiler Step**: Automatically generates an Expectation Suite. ```python from zenml.integrations.great_expectations.steps import great_expectations_profiler_step @@ -3796,7 +3795,7 @@ The whylogs/WhyLabs integration with ZenML provides a robust framework for data ) ``` -2. **Data Validator Step**: +- **Data Validator Step**: Validates a dataset against an existing Expectation Suite. 
```python from zenml.integrations.great_expectations.steps import great_expectations_validator_step @@ -3805,23 +3804,8 @@ The whylogs/WhyLabs integration with ZenML provides a robust framework for data ) ``` -**Example Pipeline**: -```python -from zenml import pipeline - -@pipeline(settings={"docker": docker_settings}) -def profiling_pipeline(): - dataset, _ = importer() - ge_profiler_step(dataset) - -@pipeline(settings={"docker": docker_settings}) -def validation_pipeline(): - dataset, condition = importer() - results = ge_validator_step(dataset, condition) -``` - -**Direct Use of Great Expectations**: -You can directly interact with the Great Expectations library using the Data Context managed by ZenML: +#### Direct Use of Great Expectations: +You can directly interact with Great Expectations in custom steps, using the ZenML-managed Data Context: ```python import great_expectations as ge from zenml.integrations.great_expectations.data_validators import GreatExpectationsDataValidator @@ -3830,14 +3814,14 @@ from zenml.integrations.great_expectations.data_validators import GreatExpectati def create_custom_expectation_suite() -> ExpectationSuite: context = GreatExpectationsDataValidator.get_data_context() suite = context.create_expectation_suite(expectation_suite_name="custom_suite") - # Add expectations... + # Add expectations and save context.save_expectation_suite(suite) context.build_data_docs() return suite ``` -**Visualization**: -Results can be visualized in the ZenML dashboard or using Jupyter notebooks: +#### Visualization: +Results can be visualized in the ZenML dashboard or using the `artifact.visualize()` method in Jupyter notebooks: ```python from zenml.client import Client @@ -3848,7 +3832,7 @@ def visualize_results(pipeline_name: str, step_name: str) -> None: validation_step.visualize() ``` -This summary captures the essential details for using Great Expectations with ZenML, including installation, configuration, and usage in data pipelines. +This integration provides a robust framework for ensuring data quality and maintaining documentation within data pipelines. ================================================== @@ -3857,20 +3841,16 @@ This summary captures the essential details for using Great Expectations with Ze ### Developing a Custom Data Validator in ZenML #### Overview -Before creating a custom data validator, familiarize yourself with ZenML's [general guide on custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). Note that the base abstraction for Data Validators is under development, and extending them is not recommended until it's finalized. - -#### Existing Data Validators -ZenML provides various built-in [Data Validator implementations](./data-validators.md#data-validator-flavors) that utilize different data logging and validation libraries. If you need a different backend, you can create a custom implementation. +To create a custom Data Validator in ZenML, it's recommended to first review the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). Note that the base abstraction for Data Validators is in progress, and extensions are not currently recommended. You can choose from existing Data Validator flavors or implement your own, but be prepared for potential refactoring when the base abstraction is released. -### Steps to Build a Custom Data Validator - -1. 
**Create a Class**: Inherit from `BaseDataValidator` and override necessary abstract methods based on your chosen library/service. +#### Steps to Build a Custom Data Validator +1. **Create a Class**: Inherit from the `BaseDataValidator` class and override necessary abstract methods based on your chosen library/service. 2. **Configuration Class**: If configuration is needed, inherit from `BaseDataValidatorConfig`. 3. **Combine Classes**: Inherit from `BaseDataValidatorFlavor` to integrate both classes. -4. **Standard Steps (Optional)**: Provide standard steps for easy integration into pipelines. +4. **Provide Standard Steps**: Optionally, include standard steps for easy integration into pipelines. -#### Registering the Custom Data Validator -Use the CLI to register your flavor class with dot notation: +#### Registration +Register your custom Data Validator flavor using the CLI with dot notation: ```shell zenml data-validator flavor register @@ -3882,48 +3862,49 @@ For example, if your flavor class is in `flavors/my_flavor.py`: zenml data-validator flavor register flavors.my_flavor.MyDataValidatorFlavor ``` -#### Important Notes -- Ensure ZenML is initialized at the root of your repository to avoid resolution issues. -- After registration, verify your flavor is listed: +Ensure ZenML is initialized at the root of your repository to avoid resolution issues. + +#### Verification +After registration, verify the new flavor is available: ```shell zenml data-validator flavor list ``` -#### Class Utilization -- **CustomDataValidatorFlavor**: Used during flavor creation via CLI. -- **CustomDataValidatorConfig**: Validates user input during stack component registration. -- **CustomDataValidator**: Engaged when the component is in use. +#### Important Notes +- The **CustomDataValidatorFlavor** is used during flavor creation via CLI. +- The **CustomDataValidatorConfig** is utilized during stack component registration to validate user inputs. +- The **CustomDataValidator** is invoked when the component is in use, allowing separation of flavor configuration from implementation. -This design separates flavor configuration from implementation, allowing registration even if dependencies are not installed locally. +This design enables registration of flavors and components even if their dependencies are not installed locally, provided the flavor and config classes are in a different module/path from the actual validator. ================================================== === File: docs/book/component-guide/step-operators/sagemaker.md === -### Summary of SageMaker Step Operator Documentation +### Amazon SageMaker Step Operator Overview -**Overview**: -Amazon SageMaker provides specialized compute instances for training jobs and a UI for model management. ZenML's SageMaker step operator enables the execution of individual steps on SageMaker compute instances. +Amazon SageMaker provides compute instances for training jobs and a UI for model management. ZenML's SageMaker step operator enables the execution of individual pipeline steps on SageMaker instances. -**When to Use**: -Utilize the SageMaker step operator if: -- Your pipeline steps require additional computing resources not available in your orchestrator. -- You have access to SageMaker. For other cloud providers, refer to the Vertex or AzureML step operators. +#### When to Use +Use the SageMaker step operator when: +- Your pipeline requires additional compute resources not provided by your orchestrator. +- You have access to SageMaker. 
-**Deployment Requirements**: -1. Create an IAM role with at least `AmazonS3FullAccess` and `AmazonSageMakerFullAccess` policies. [Guide here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-create-execution-role). -2. Install the ZenML `aws` integration: +#### Deployment Requirements +1. **IAM Role**: Create a role in the IAM console with at least `AmazonS3FullAccess` and `AmazonSageMakerFullAccess` policies. +2. **ZenML AWS Integration**: Install using: ```shell zenml integration install aws ``` -3. Ensure Docker is installed and running. -4. Set up an AWS container registry and a remote artifact store for step artifact management. -5. Choose an instance type for execution. [Available types](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-instance-types.html). -6. (Optional) Create an experiment to group SageMaker runs. [Creation guide](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments-create.html). +3. **Docker**: Ensure Docker is installed and running. +4. **AWS Container Registry**: Set up as part of your stack. +5. **Remote Artifact Store**: Required for reading/writing artifacts. +6. **Instance Type**: Choose an instance type for execution. Refer to [available instance types](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-instance-types.html). +7. **Optional**: Create an experiment to group SageMaker runs. -**Authentication Methods**: -1. **Service Connector** (recommended): +#### Authentication Methods +1. **Service Connector** (Recommended): - Register a service connector and connect it to the step operator: ```shell zenml service-connector register --type aws -i @@ -3931,13 +3912,13 @@ Utilize the SageMaker step operator if: zenml step-operator connect --connector zenml stack register -s ... --set ``` - + 2. **Implicit Authentication**: - - For local orchestrators, ZenML uses the `default` profile in your AWS config file. + - For local orchestrators, ZenML will use the `default` profile in your AWS configuration. - For remote orchestrators, ensure the environment can authenticate to AWS and assume the specified IAM role. -**Step Execution**: -Once the step operator is added to your stack, use it in your pipeline: +#### Using the Step Operator +To execute steps in SageMaker, specify the step operator in the `@step` decorator: ```python from zenml import step @@ -3945,13 +3926,16 @@ from zenml import step def trainer(...) -> ...: """Train a model.""" ``` -ZenML builds a Docker image `/zenml:` for execution. -**Additional Configuration**: -For further settings, use `SagemakerStepOperatorSettings` when defining your pipeline. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.flavors.sagemaker_step_operator_flavor.SagemakerStepOperatorSettings) for available attributes. +ZenML builds a Docker image `/zenml:` for running steps in SageMaker. + +#### Additional Configuration +Additional settings can be specified using `SagemakerStepOperatorSettings`. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.flavors.sagemaker_step_operator_flavor.SagemakerStepOperatorSettings) for configurable attributes. + +#### Enabling CUDA for GPU +To run steps on GPU, follow the instructions for enabling CUDA, which is essential for full acceleration. -**CUDA for GPU**: -To run steps on GPU, follow the instructions for enabling CUDA. This is essential for full GPU acceleration. 
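Returning to the additional-configuration note above, here is a minimal sketch of passing settings to the step operator. It assumes that `SagemakerStepOperatorSettings` accepts an `estimator_args` mapping forwarded to the underlying SageMaker Estimator (verify against the SDK docs linked in that paragraph); the operator name is a placeholder:

```python
from zenml import step
from zenml.integrations.aws.flavors.sagemaker_step_operator_flavor import (
    SagemakerStepOperatorSettings,
)

# Assumed attribute: estimator_args is passed through to the SageMaker Estimator
sagemaker_settings = SagemakerStepOperatorSettings(
    estimator_args={"instance_type": "ml.m5.xlarge"},
)

@step(step_operator="<STEP_OPERATOR_NAME>", settings={"step_operator": sagemaker_settings})
def trainer() -> None:
    """Train a model."""
```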
+For more details, consult the [ZenML documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.step_operators.sagemaker_step_operator.SagemakerStepOperator). ================================================== @@ -3959,24 +3943,24 @@ To run steps on GPU, follow the instructions for enabling CUDA. This is essentia ### Kubernetes Step Operator Overview -ZenML's Kubernetes step operator enables the execution of individual steps in Kubernetes pods, ideal for pipelines needing additional computing resources not available from the orchestrator. +ZenML's Kubernetes step operator enables the execution of individual steps in Kubernetes pods, ideal for pipelines requiring additional computing resources not provided by the orchestrator. #### When to Use -- When pipeline steps require more CPU, GPU, or memory resources. -- When you have access to a Kubernetes cluster. +- When pipeline steps need more CPU, GPU, or memory than the orchestrator can provide. +- When a Kubernetes cluster is accessible. #### Deployment Requirements -1. **Kubernetes Cluster**: Must be deployed (refer to the cloud guide for options). +1. **Kubernetes Cluster**: Must be deployed (refer to the cloud guide for deployment options). 2. **ZenML Kubernetes Integration**: Install with: ```shell zenml integration install kubernetes ``` 3. **Docker or Remote Image Builder**: Required for building images. -4. **Remote Artifact Store**: Necessary for reading/writing artifacts. +4. **Remote Artifact Store**: Necessary for artifact read/write access. **Recommendation**: Set up a Service Connector for connecting the Kubernetes step operator to the cluster, especially for cloud-managed clusters (AWS, GCP, Azure). -#### Registering the Step Operator +#### Registering and Using the Step Operator You can register the step operator in two ways: 1. **Using a Service Connector**: @@ -3991,14 +3975,13 @@ You can register the step operator in two ways: zenml step-operator register --flavor=kubernetes --kubernetes_context= ``` -#### Updating the Active Stack -Add the step operator to your active stack: +Update the active stack to include the step operator: ```shell zenml stack update -s ``` -#### Using the Step Operator -Specify the step operator in the `@step` decorator: +#### Defining Steps +To execute a step in Kubernetes, specify the step operator in the `@step` decorator: ```python from zenml import step @@ -4007,22 +3990,16 @@ def trainer(...) -> ...: """Train a model.""" ``` -#### Interacting with Pods -For debugging, interact with pods using `kubectl`. Pods are labeled for easy identification: -- `run`: ZenML run name -- `pipeline`: ZenML pipeline name +ZenML builds Docker images containing your code for execution in Kubernetes. -Example command to delete pods: +#### Interacting with Pods +For debugging, you can interact with pods using their labels: ```shell kubectl delete pod -n zenml -l pipeline=kubernetes_example_pipeline ``` #### Additional Configuration -Use `KubernetesStepOperatorSettings` to configure: -- **Pod Settings**: Node selectors, labels, affinity, tolerations, image pull secrets. -- **Service Account**: Specify the service account for pods. 
-
-Example configuration:
+Customize the Kubernetes step operator using `KubernetesStepOperatorSettings`:
 ```python
 from zenml.integrations.kubernetes.flavors import KubernetesStepOperatorSettings
@@ -4031,11 +4008,10 @@ kubernetes_settings = KubernetesStepOperatorSettings(
         "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"},
         "resources": {
             "requests": {"cpu": "2", "memory": "4Gi"},
-            "limits": {"cpu": "4", "memory": "8Gi"},
+            "limits": {"cpu": "4", "memory": "8Gi"}
         },
-    },
-    kubernetes_namespace="ml-pipelines",
-    service_account_name="zenml-pipeline-runner"
+    },
+    service_account_name="zenml-pipeline-runner"
 )
 
 @step(settings={"step_operator": kubernetes_settings})
@@ -4043,10 +4019,10 @@ def my_kubernetes_step():
     ...
 ```
 
-#### GPU Configuration
-To utilize GPUs, follow specific instructions to enable CUDA and customize settings for GPU acceleration.
+Refer to the SDK docs for a complete list of attributes and detailed configuration options. Note that `service_account_name` is a top-level setting rather than part of `pod_settings`.
 
-For more details, refer to the SDK documentation for available attributes and configuration options.
+#### Enabling CUDA for GPU
+To run steps on GPU, follow specific instructions to enable CUDA for full acceleration.
 
==================================================
 
@@ -4054,16 +4030,15 @@ For more details, refer to the SDK documentation for available attributes and co
 
=== File: docs/book/component-guide/step-operators/modal.md ===
 
### Modal Step Operator Overview
 
-**Modal** is a cloud infrastructure platform optimized for fast execution, particularly for Docker image builds and hardware provisioning. The **ZenML Modal step operator** allows users to run individual steps on Modal compute instances.
+**Modal** is a cloud infrastructure platform that provides specialized compute instances for running code, particularly efficient for building Docker images and provisioning hardware. The **ZenML Modal step operator** allows submission of individual steps to Modal compute instances.
 
#### When to Use
-Utilize the Modal step operator if:
-- Fast execution is required for resource-intensive steps (CPU, GPU, memory).
-- Specific hardware requirements need to be defined (e.g., GPU type, CPU count).
-- You have access to Modal.
+- Fast execution for resource-intensive steps (CPU, GPU, memory).
+- Specific hardware requirements (GPU type, CPU count, memory).
+- Access to Modal.
 
#### Deployment Steps
-1. **Sign Up**: Create a Modal account [here](https://modal.com/signup).
+1. **Sign Up**: Create a Modal account.
2. **Install CLI**: Run:
   ```shell
   pip install modal
@@ -4071,7 +4046,7 @@ Utilize the Modal step operator if:
   ```
 
#### Usage Requirements
-- Install ZenML's Modal integration:
+- Install ZenML Modal integration:
  ```shell
  zenml integration install modal
  ```
- Ensure Docker is installed and running.
- Set up a cloud artifact store and a cloud container registry compatible with ZenML.
 
#### Registering the Step Operator
-To register the step operator:
+Register the step operator and update your stack:
```shell
zenml step-operator register --flavor=modal
zenml stack update -s ...
```
 
#### Executing Steps
-To execute a step in Modal, use the `@step` decorator:
+Use the step operator in the `@step` decorator:
```python
from zenml import step
 
@@ -4094,10 +4069,10 @@ from zenml import step
def trainer(...) -> ...:
    """Train a model."""
```
-ZenML will create a Docker image containing your code for execution in Modal.
+ZenML builds a Docker image for execution in Modal.
#### Additional Configuration -Specify hardware requirements using the `ResourceSettings` class: +Specify hardware requirements using `ResourceSettings`: ```python from zenml.config import ResourceSettings from zenml.integrations.modal.flavors import ModalStepOperatorSettings @@ -4115,116 +4090,90 @@ resource_settings = ResourceSettings(cpu=2, memory="32GB") def my_modal_step(): ... ``` -**Note**: The `cpu` parameter in `ResourceSettings` accepts a single integer, indicating a soft minimum limit. The minimum cost for the specified resources (2 CPUs and 32GB memory) is approximately $1.03 per hour. +- The `cpu` parameter in `ResourceSettings` is a soft minimum limit. +- Example cost for 2 CPUs and 32GB memory is approximately $1.03/hour. -#### Important Considerations -- The configuration will run `my_modal_step` on a Modal instance with 1 A100 GPU, 2 CPUs, and 32GB memory. -- For supported GPU types and additional settings, refer to the [Modal docs](https://modal.com/docs/reference/modal.gpu) and [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-modal/#zenml.integrations.modal.flavors.modal_step_operator_flavor.ModalStepOperatorSettings). -- Advanced settings for region and cloud provider are available only for Modal Enterprise and Team plan customers. Use looser settings to avoid execution failures, and consult Modal's error messages for troubleshooting. More on region selection can be found [here](https://modal.com/docs/guide/region-selection). +This configuration runs `my_modal_step` on a Modal instance with 1 A100 GPU, 2 CPUs, and 32GB memory. For supported GPU types, refer to the [Modal docs](https://modal.com/docs/reference/modal.gpu). + +#### Notes +- Region and cloud provider settings are available for Modal Enterprise and Team plans. +- Use looser settings to prevent execution failures; Modal provides detailed error messages for troubleshooting. For more on region selection, see the [Modal docs](https://modal.com/docs/guide/region-selection). ================================================== === File: docs/book/component-guide/step-operators/spark-kubernetes.md === -### Spark Integration Overview - -The `spark` integration in ZenML includes two main step operators: +### Summary of Spark Step Operators Documentation -1. **SparkStepOperator**: Base class for Spark-related step operators. -2. **KubernetesSparkStepOperator**: Launches ZenML steps as Spark applications on a Kubernetes cluster. +The `spark` integration in ZenML includes two main step operators: -### SparkStepOperator Configuration +1. **SparkStepOperator**: A base class for Spark-related step operators. +2. **KubernetesSparkStepOperator**: A subclass that launches ZenML steps as Spark applications on Kubernetes. -**Configuration Parameters**: -- `master`: URL for the Spark cluster (supports Mesos, YARN, Kubernetes). -- `deploy_mode`: Determines where the driver node runs; options are 'cluster' (default) or 'client'. -- `submit_kwargs`: JSON string for additional Spark parameters. 
- -**Implementation**: -```python -from typing import List, Optional, Dict, Any -from pyspark.conf import SparkConf -from zenml.step_operators import BaseStepOperatorConfig, BaseStepOperator +#### SparkStepOperator Configuration -class SparkStepOperatorConfig(BaseStepOperatorConfig): - master: str - deploy_mode: str = "cluster" - submit_kwargs: Optional[Dict[str, Any]] = None +The `SparkStepOperatorConfig` class defines key configuration parameters: -class SparkStepOperator(BaseStepOperator): - def launch(self, info: "StepRunInfo", entrypoint_command: List[str]) -> None: - # Launches the step on Spark - ... -``` +- `master`: The master URL for the Spark cluster (supports Kubernetes, Mesos, YARN). +- `deploy_mode`: Can be 'cluster' (default) or 'client', indicating where the driver node runs. +- `submit_kwargs`: Optional JSON string for additional Spark parameters. -**Key Methods**: -- `_resource_configuration`: Maps ZenML `ResourceSettings` to Spark configuration. -- `_backend_configuration`: Handles cluster-manager-specific configurations. -- `_io_configuration`: Configures input/output sources for Spark. +**Key Methods:** +- `_resource_configuration`: Configures Spark resources. +- `_backend_configuration`: Configures backend settings for cluster managers. +- `_io_configuration`: Configures input/output sources. - `_additional_configuration`: Appends user-defined parameters. - `_launch_spark_job`: Executes a Spark job using `spark-submit`. -**Note**: `_io_configuration` is effective with `S3ArtifactStore` and may require additional `submit_args` for other stores. +**Warning**: The `_io_configuration` method is effective only with `S3ArtifactStore` requiring authentication. -### KubernetesSparkStepOperator +#### KubernetesSparkStepOperator -This operator extends `SparkStepOperator` and utilizes `PipelineDockerImageBuilder` to manage Docker images. +This operator extends `SparkStepOperator` and includes additional configuration parameters: -**Configuration Parameters**: - `namespace`: Kubernetes namespace for driver and executor pods. - `service_account`: Service account for Spark components. -**Implementation**: -```python -class KubernetesSparkStepOperatorConfig(SparkStepOperatorConfig): - namespace: Optional[str] = None - service_account: Optional[str] = None +**Backend Configuration**: The `_backend_configuration` method is tailored for Kubernetes, adjusting Spark settings accordingly. -class KubernetesSparkStepOperator(SparkStepOperator): - def _backend_configuration(self, spark_config: SparkConf, step_config: "StepConfiguration") -> None: - # Configure Spark for Kubernetes - ... -``` +#### Usage Scenarios -### Usage Guidelines +Use the Spark step operator when: +- Handling large datasets. +- Designing steps that benefit from distributed computing. -**When to Use**: -- For large data processing. -- When leveraging distributed computing for efficiency. +#### Deployment Steps + +To deploy `KubernetesSparkStepOperator`, follow these steps: -**Deployment Requirements**: -1. **Remote ZenML Server**: Follow the deployment guide. -2. **Kubernetes Cluster**: Set up using cloud providers or custom infrastructure. +1. **Remote ZenML Server**: Refer to the deployment guide. +2. **Kubernetes Cluster**: Set up using various cloud providers or custom infrastructure. For AWS, follow the Spark EKS Setup Guide. **EKS Setup Guide**: -1. Create IAM roles for EKS. -2. Set up an EKS cluster and node group. -3. Build a Docker image for Spark drivers and executors using the `docker-image-tool`. 
+- Create IAM roles for EKS cluster and EC2 nodes. +- Create an EKS cluster and note the cluster name and API server endpoint. +- Add a node group with recommended instance types. -**RBAC Configuration**: -Create a `rbac.yaml` file for Kubernetes access and apply it using: -```bash -kubectl create -f rbac.yaml -``` +**Docker Image for Spark**: +- Use Spark's Docker images or build your own with the `docker-image-tool`. +- Download required packages (`hadoop-aws`, `aws-java-sdk-bundle`) and build the image. -### Registering and Using the Step Operator +**RBAC Configuration**: Create a `rbac.yaml` file for Kubernetes access and apply it using `kubectl`. -To use the `KubernetesSparkStepOperator`, ensure: -- ZenML `spark` integration is installed. -- Docker is running. -- Remote artifact store and container registry are configured. +#### Using the KubernetesSparkStepOperator + +To use the operator: +- Install the ZenML `spark` integration. +- Ensure Docker and a remote artifact store are set up. +- Register the step operator and stack: -**Register the Step Operator**: ```bash zenml step-operator register spark_step_operator \ - --flavor=spark-kubernetes \ - --master=k8s://$EKS_API_SERVER_ENDPOINT \ - --namespace= \ - --service_account= -``` + --flavor=spark-kubernetes \ + --master=k8s://$EKS_API_SERVER_ENDPOINT \ + --namespace= \ + --service_account= -**Register the Stack**: -```bash zenml stack register spark_stack \ -o default \ -s spark_step_operator \ @@ -4234,7 +4183,9 @@ zenml stack register spark_stack \ --set ``` -**Define a Step**: +**Defining Steps**: +Use the `@step` decorator to define steps: + ```python from zenml import step @@ -4243,18 +4194,15 @@ def step_on_spark(...) -> ...: ... ``` -**Dynamic Step Operator Usage**: -```python -from zenml.client import Client - -step_operator = Client().active_stack.step_operator +After execution, verify Spark driver pods with: -@step(step_operator=step_operator.name) -def step_on_spark(...) -> ...: - ... +```bash +kubectl get pods -n $KUBERNETES_NAMESPACE ``` -### Additional Configuration +**Dynamic Operator Usage**: Use the ZenML Client to dynamically reference the active stack's step operator. + +#### Additional Configuration For more configuration options, refer to `SparkStepOperatorSettings` and the SDK documentation for available attributes. @@ -4262,32 +4210,30 @@ For more configuration options, refer to `SparkStepOperatorSettings` and the SDK === File: docs/book/component-guide/step-operators/azureml.md === -### AzureML Step Operator in ZenML +### AzureML Step Operator Overview -**Overview**: AzureML provides compute instances for training jobs and a UI for model management. ZenML's AzureML step operator allows submission of individual steps to AzureML compute instances. +AzureML provides specialized compute instances for training jobs and a UI for model management. ZenML's AzureML step operator allows submission of individual pipeline steps to AzureML compute instances. -**When to Use**: -- If pipeline steps require computing resources not available in your orchestrator. -- If you have access to AzureML. For other cloud providers, consider SageMaker or Vertex step operators. +#### When to Use AzureML Step Operator +- When pipeline steps require computing resources not available from your orchestrator. +- If you have access to AzureML; for other cloud providers, consider SageMaker or Vertex step operators. -**Deployment Steps**: -1. Create an Azure Machine Learning workspace, including an Azure container registry and storage account. 
-2. (Optional) Create a compute instance or cluster in AzureML. -3. (Optional) Create a Service Principal for authentication if using a service connector. +#### Deployment Steps +1. **Create Azure Workspace**: Set up a Machine Learning workspace on Azure, including a container registry and storage account. +2. **(Optional) Create Compute Instance/Cluster**: Use Azure Machine Learning Studio to create a compute instance or cluster. If omitted, the operator will use serverless compute or provision a new target. +3. **(Optional) Create Service Principal**: For authentication via a service connector. -**Usage Requirements**: +#### Usage Requirements - Install ZenML Azure integration: ```shell zenml integration install azure ``` -- Docker must be installed and running. +- Ensure Docker is installed and running. - Set up an Azure container registry and artifact store. -- Ensure an AzureML workspace and optional compute cluster are available. -**Authentication Methods**: -1. **Service Connector** (recommended): - - Register a service connector and connect it to the AzureML step operator. - - Ensure the connector has permissions to manage AzureML jobs. +#### Authentication Methods +1. **Service Connector** (Recommended): + - Register a service connector with permissions to manage AzureML jobs. ```shell zenml service-connector register --type azure -i zenml step-operator register \ @@ -4300,10 +4246,10 @@ For more configuration options, refer to `SparkStepOperatorSettings` and the SDK ``` 2. **Implicit Authentication**: - - For local orchestrators, ZenML uses the Azure CLI configuration. + - For local orchestrators, ZenML uses Azure CLI configuration. - For remote orchestrators, ensure they can authenticate to Azure. -**Executing Steps**: +#### Executing Steps To execute steps in AzureML, specify the step operator in the `@step` decorator: ```python from zenml import step @@ -4312,20 +4258,15 @@ from zenml import step def trainer(...) -> ...: """Train a model.""" ``` +ZenML builds a Docker image for the pipeline. -**Docker Image**: ZenML builds a Docker image `/zenml:` for running steps in AzureML. - -**Configuration**: -Use `AzureMLStepOperatorSettings` to configure compute resources. It supports three modes: -1. **Serverless Compute** (default) -2. **Compute Instance**: - - Requires `compute_name`. - - Can create or use an existing instance. -3. **Compute Cluster**: - - Requires `compute_name`. - - Can create or use an existing cluster. +#### Additional Configuration +Use `AzureMLStepOperatorSettings` to configure compute resources: +- **Serverless Compute**: Default mode. +- **Compute Instance**: Requires `compute_name`, can create or use existing instance. +- **Compute Cluster**: Requires `compute_name`, can create or use existing cluster. -Example for a compute instance: +Example configuration for a compute instance: ```python from zenml.integrations.azure.flavors import AzureMLStepOperatorSettings @@ -4341,7 +4282,8 @@ def my_azureml_step(): ... ``` -**GPU Support**: For GPU usage, follow specific instructions to enable CUDA for optimal performance. +#### GPU Support +To run steps on GPU, follow specific instructions to enable CUDA for full acceleration. For further details, refer to the AzureML documentation and ZenML SDK documentation. 
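For the compute-cluster mode listed above, a sketch along the lines of the compute-instance example; the `mode` and `compute_name` attribute values are assumptions to verify against the AzureML flavor SDK docs, and the cluster and operator names are placeholders:

```python
from zenml import step
from zenml.integrations.azure.flavors import AzureMLStepOperatorSettings

# Reuse an existing AzureML compute cluster by name (placeholder values)
azureml_settings = AzureMLStepOperatorSettings(
    mode="compute-cluster",
    compute_name="my-cpu-cluster",
)

@step(step_operator="<STEP_OPERATOR_NAME>", settings={"step_operator": azureml_settings})
def trainer() -> None:
    """Train a model."""
```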
@@ -4351,25 +4293,25 @@ For further details, refer to the AzureML documentation and ZenML SDK documentat # Step Operators -The step operator facilitates the execution of individual pipeline steps in specialized runtime environments optimized for specific workloads, allowing access to resources like GPUs or distributed processing frameworks (e.g., [Spark](https://spark.apache.org/)). +The step operator allows for executing individual pipeline steps in specialized environments optimized for specific workloads, such as those requiring GPUs or distributed processing frameworks like [Spark](https://spark.apache.org/). -### Comparison to Orchestrators -The orchestrator is a mandatory component that executes all pipeline steps in order and manages scheduling. In contrast, the step operator executes individual steps in separate environments when the orchestrator's environment is insufficient. +## Comparison to Orchestrators +The orchestrator is a mandatory component that executes all pipeline steps in order and provides scheduling features. In contrast, the step operator is used to execute individual steps in separate environments when the orchestrator's environment is insufficient. -### When to Use It -Use a step operator when pipeline steps require resources unavailable in the orchestrator's runtime environments. For instance, if a step needs a GPU for training a computer vision model while the orchestrator (e.g., [Kubeflow](../orchestrators/kubeflow.md)) lacks GPU nodes, a step operator like [SageMaker](sagemaker.md) or [Vertex](vertex.md) should be used. +## When to Use It +Use a step operator when pipeline steps require resources unavailable in the orchestrator's runtime environment. For example, if a step needs a GPU for training a computer vision model, but the orchestrator (like [Kubeflow](../orchestrators/kubeflow.md)) lacks GPU nodes, a step operator such as [SageMaker](sagemaker.md), [Vertex](vertex.md), or [AzureML](azureml.md) should be used. 
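To make the division of labor concrete, here is a minimal sketch of a pipeline in which only the training step is dispatched to a step operator (the name `sagemaker` is a placeholder for whatever operator is registered in the active stack), while the other step runs in the orchestrator's environment:

```python
from zenml import pipeline, step

@step
def preprocess() -> str:
    # Runs in the orchestrator's own environment
    return "features"

@step(step_operator="sagemaker")
def train(features: str) -> None:
    # Runs in the specialized environment provided by the step operator
    ...

@pipeline
def training_pipeline():
    train(preprocess())
```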
-### Step Operator Flavors +## Step Operator Flavors ZenML provides the following step operators for major cloud providers: -| Step Operator | Flavor | Integration | Notes | -|---------------|-------------|-------------|--------------------------------------| -| [AzureML](azureml.md) | `azureml` | `azure` | Executes steps using AzureML | -| [Kubernetes](kubernetes.md) | `kubernetes` | `kubernetes` | Executes steps using Kubernetes Pods | -| [Modal](modal.md) | `modal` | `modal` | Executes steps using Modal | -| [SageMaker](sagemaker.md) | `sagemaker` | `aws` | Executes steps using SageMaker | -| [Spark](spark-kubernetes.md) | `spark` | `spark` | Executes steps in a distributed manner using Spark | -| [Vertex](vertex.md) | `vertex` | `gcp` | Executes steps using Vertex AI | +| Step Operator | Flavor | Integration | Notes | +|---------------|-------------|-------------|----------------------------------------| +| [AzureML](azureml.md) | `azureml` | `azure` | Executes steps using AzureML | +| [Kubernetes](kubernetes.md) | `kubernetes` | `kubernetes` | Executes steps using Kubernetes Pods | +| [Modal](modal.md) | `modal` | `modal` | Executes steps using Modal | +| [SageMaker](sagemaker.md) | `sagemaker` | `aws` | Executes steps using SageMaker | +| [Spark](spark-kubernetes.md) | `spark` | `spark` | Executes steps in a distributed manner using Spark on Kubernetes | +| [Vertex](vertex.md) | `vertex` | `gcp` | Executes steps using Vertex AI | | [Custom Implementation](custom.md) | _custom_ | | Allows for custom step operator implementations | To view available flavors, use: @@ -4377,8 +4319,8 @@ To view available flavors, use: zenml step-operator flavor list ``` -### How to Use It -You don't need to interact directly with ZenML step operators in your code. If the desired step operator is part of your active [ZenML stack](../../user-guide/production-guide/understand-stacks.md), specify it in the `@step` decorator: +## How to Use It +You do not need to interact directly with ZenML step operators in your code. If the desired step operator is part of your active [ZenML stack](../../user-guide/production-guide/understand-stacks.md), specify it in the `@step` decorator: ```python from zenml import step @@ -4388,64 +4330,64 @@ def my_step(...) -> ...: ... ``` -#### Specifying Per-Step Resources +### Specifying Per-Step Resources For additional hardware resources, specify them in your steps as detailed [here](../../how-to/pipeline-development/training-with-gpus/README.md). -#### Enabling CUDA for GPU Hardware -To run steps on a GPU, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full GPU acceleration. +### Enabling CUDA for GPU-Backed Hardware +To run steps on a GPU, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full acceleration. ================================================== === File: docs/book/component-guide/step-operators/vertex.md === -### Summary: Executing Individual Steps in Vertex AI +### Summary of Executing Steps in Vertex AI -**Overview**: Google Cloud Vertex AI provides specialized compute instances for training jobs and a UI for model management. ZenML's Vertex AI step operator allows submission of individual steps to Vertex AI compute instances. +**Overview**: +Google Cloud's Vertex AI provides specialized compute instances for training jobs and a UI for model management. 
ZenML's Vertex AI step operator allows submission of individual pipeline steps to Vertex AI compute instances. -#### When to Use +**When to Use**: - Use the Vertex step operator if: - - Your pipeline steps require more compute resources than your orchestrator provides. + - Your pipeline steps require additional computing resources (CPU, GPU, memory). - You have access to Vertex AI. -#### Deployment Steps -1. **Enable Vertex AI**: [Enable here](https://console.cloud.google.com/vertex-ai). -2. **Create a Service Account**: Grant permissions for Vertex AI jobs (`roles/aiplatform.admin`) and container registry (`roles/storage.admin`). +**Deployment Steps**: +1. Enable Vertex AI. +2. Create a service account with permissions for Vertex AI jobs (`roles/aiplatform.admin`) and container registry access (`roles/storage.admin`). -#### Usage Requirements -- Install ZenML GCP integration: +**Usage Requirements**: +- Install ZenML's GCP integration: ```shell zenml integration install gcp ``` -- Install and run Docker. +- Ensure Docker is installed and running. - Enable Vertex AI and have a service account file. - Set up a GCR container registry. -- (Optional) Specify a machine type (default: `n1-standard-4`). -- Set up a remote artifact store for reading/writing step artifacts. +- Optionally, specify a machine type (default is `n1-standard-4`). +- Configure a remote artifact store for artifact management. -#### Authentication Options +**Authentication Options**: 1. **Using `gcloud` CLI**: ```shell gcloud auth login zenml step-operator register --flavor=vertex --project= --region= ``` -2. **Using Service Account Key File**: +2. **Using a service account key file**: ```shell zenml step-operator register --flavor=vertex --project= --region= --service_account_path= ``` -3. **Using GCP Service Connector** (recommended): +3. **Using a GCP Service Connector** (recommended): ```shell zenml service-connector register --type gcp --auth-method=service-account --project_id= --service_account_json=@ zenml step-operator register --flavor=vertex --region= zenml step-operator connect --connector ``` -#### Update Active Stack -Add the step operator to the active stack: +**Updating the Active Stack**: ```shell zenml stack update -s ``` -#### Define Steps +**Defining Steps**: Use the registered step operator in your pipeline: ```python from zenml import step @@ -4455,14 +4397,16 @@ def trainer(...) -> ...: """Train a model.""" ``` -#### Additional Configuration +**Docker Image**: ZenML builds a Docker image named `/zenml:` for running steps in Vertex AI. + +**Additional Configuration**: Specify service account, network, and reserved IP ranges: ```shell zenml step-operator register --flavor=vertex --project= --region= --service_account= --network= --reserved_ip_ranges= ``` -#### Custom Settings -Pass `VertexStepOperatorSettings` for additional configurations: +**VertexStepOperatorSettings**: +Customize settings for the step operator: ```python from zenml import step from zenml.integrations.gcp.flavors.vertex_step_operator_flavor import VertexStepOperatorSettings @@ -4478,22 +4422,21 @@ def trainer(...) -> ...: """Train a model.""" ``` -#### CUDA for GPU -Follow the instructions for enabling CUDA on GPU-backed hardware to ensure full acceleration. +**CUDA for GPU**: Follow specific instructions to enable CUDA for GPU acceleration when using the step operator. 
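+
+The settings body elided in the snippet above typically pins the machine shape. A sketch of what that could look like, assuming the step operator is registered as `vertex`; the field names (`machine_type`, `accelerator_type`, `accelerator_count`) are assumptions to verify against the `VertexStepOperatorSettings` reference:
+
+```python
+from zenml import step
+from zenml.integrations.gcp.flavors.vertex_step_operator_flavor import VertexStepOperatorSettings
+
+# Assumed field names; these values request one NVIDIA T4 on an n1-standard-8 machine.
+vertex_settings = VertexStepOperatorSettings(
+    machine_type="n1-standard-8",
+    accelerator_type="NVIDIA_TESLA_T4",
+    accelerator_count=1,
+)
+
+# The settings key may also be flavor-qualified, e.g. "step_operator.vertex".
+@step(step_operator="vertex", settings={"step_operator": vertex_settings})
+def trainer() -> None:
+    """Train a model on a GPU-backed Vertex AI machine."""
+```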
-For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.step_operators.vertex_step_operator.VertexStepOperator). +For further details, refer to the SDK documentation for available attributes and configuration options. ================================================== === File: docs/book/component-guide/step-operators/custom.md === -### Developing a Custom Step Operator in ZenML +### Summary: Developing a Custom Step Operator in ZenML #### Overview -To create a custom step operator in ZenML, it's essential to understand the general concepts of writing custom components. Refer to the [general guide](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge. +To develop a custom step operator in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction -The `BaseStepOperator` is the abstract class for running pipeline steps in a separate environment. It provides a basic interface: +The `BaseStepOperator` is the abstract class for running pipeline steps in separate environments. It provides a basic interface: ```python from abc import ABC, abstractmethod @@ -4506,51 +4449,63 @@ class BaseStepOperatorConfig(StackComponentConfig): """Base config for step operators.""" class BaseStepOperator(StackComponent, ABC): - """Base class for ZenML step operators.""" + """Base class for all ZenML step operators.""" @abstractmethod def launch(self, info: StepRunInfo, entrypoint_command: List[str]) -> None: - """Executes a step in a synchronous job.""" -``` - -#### Creating a Custom Step Operator -To build a custom flavor for a step operator, follow these steps: + """Execute a step synchronously.""" + +class BaseStepOperatorFlavor(Flavor): + """Base class for all ZenML step operator flavors.""" -1. **Subclass `BaseStepOperator`:** Implement the `launch` method to prepare the execution environment (e.g., Docker image) and run the entrypoint command. Ensure all necessary dependencies are installed and the source code is accessible. + @property + @abstractmethod + def name(self) -> str: + """Returns the name of the flavor.""" + + @property + def type(self) -> StackComponentType: + return StackComponentType.STEP_OPERATOR -2. **Handle Resources:** If applicable, manage resources defined in `info.config.resource_settings`. + @property + def config_class(self) -> Type[BaseStepOperatorConfig]: + return BaseStepOperatorConfig -3. **Configuration Class:** Create a class inheriting from `BaseStepOperatorConfig` to add configuration parameters. + @property + @abstractmethod + def implementation_class(self) -> Type[BaseStepOperator]: + """Returns the implementation class for this flavor.""" +``` -4. **Flavor Class:** Inherit from `BaseStepOperatorFlavor`, providing a name through its abstract property. Register the flavor via CLI: +#### Steps to Create a Custom Step Operator +1. **Subclass `BaseStepOperator`**: Implement the `launch` method to prepare the execution environment (e.g., Docker) and run the `entrypoint_command`. +2. **Handle Resources**: Manage resources specified in `info.config.resource_settings`. +3. **Configuration Class**: Create a class inheriting from `BaseStepOperatorConfig` for custom parameters. +4. **Flavor Class**: Inherit from `BaseStepOperatorFlavor`, providing a name for your flavor. 
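+
+Putting the four steps above together, a minimal skeleton might look like this. The module paths and the `queue_name` parameter are illustrative assumptions:
+
+```python
+from typing import List, Type
+
+from zenml.config.step_run_info import StepRunInfo
+from zenml.step_operators import (
+    BaseStepOperator,
+    BaseStepOperatorConfig,
+    BaseStepOperatorFlavor,
+)
+
+class MyStepOperatorConfig(BaseStepOperatorConfig):
+    queue_name: str  # hypothetical backend-specific parameter
+
+class MyStepOperator(BaseStepOperator):
+    def launch(self, info: StepRunInfo, entrypoint_command: List[str]) -> None:
+        # Prepare an environment with the step's code and requirements,
+        # run `entrypoint_command` in it, and wait for completion.
+        ...
+
+class MyStepOperatorFlavor(BaseStepOperatorFlavor):
+    @property
+    def name(self) -> str:
+        return "my_flavor"
+
+    @property
+    def config_class(self) -> Type[MyStepOperatorConfig]:
+        return MyStepOperatorConfig
+
+    @property
+    def implementation_class(self) -> Type[MyStepOperator]:
+        return MyStepOperator
+```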
- ```shell - zenml step-operator flavor register - ``` - - Example registration command: - - ```shell - zenml step-operator flavor register flavors.my_flavor.MyStepOperatorFlavor - ``` +**Registering the Flavor**: +```shell +zenml step-operator flavor register +``` +Example: +```shell +zenml step-operator flavor register flavors.my_flavor.MyStepOperatorFlavor +``` #### Important Notes -- Ensure ZenML is initialized at the root of your repository to avoid resolution issues. -- After registration, verify the flavor is available with: - - ```shell - zenml step-operator flavor list - ``` +- Ensure ZenML is initialized at the root of your repository for proper flavor resolution. +- After registration, list available flavors: +```shell +zenml step-operator flavor list +``` #### Interaction in ZenML Workflow -- The **CustomStepOperatorFlavor** is used during flavor creation. -- The **CustomStepOperatorConfig** validates user input during registration. -- The **CustomStepOperator** is utilized when the component is in action, allowing separation of flavor configuration from implementation. +- `CustomStepOperatorFlavor` is used during flavor creation. +- `CustomStepOperatorConfig` validates user inputs during registration. +- `CustomStepOperator` is utilized when the component is in use, allowing separation of configuration from implementation. #### Enabling GPU Support -For GPU execution, follow the instructions on [enabling CUDA](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure proper acceleration. - -This summary encapsulates the essential steps and technical details for developing a custom step operator in ZenML while omitting redundant explanations. +For GPU execution, follow the instructions to enable CUDA for full acceleration. Refer to [GPU training documentation](../../how-to/pipeline-development/training-with-gpus/README.md). ================================================== @@ -4558,19 +4513,19 @@ This summary encapsulates the essential steps and technical details for developi ### Slack Alerter Documentation Summary -The `SlackAlerter` allows sending messages and questions to a Slack channel from ZenML pipelines. +**Overview**: The `SlackAlerter` allows sending messages and questions to a specified Slack channel from ZenML pipelines. #### Setup Instructions 1. **Create a Slack App**: - Set up a Slack workspace and create a Slack App with a bot. - - Grant the following permissions in the `OAuth & Permissions` tab: + - Assign the following permissions in the `OAuth & Permissions` tab: - `chat:write` - `channels:read` - `channels:history` - - Invite the app to your desired channel. + - Invite the app to your desired channel using `/invite` or through channel settings. -2. **Register Slack Alerter in ZenML**: +2. **Registering Slack Alerter in ZenML**: - Install the Slack integration: ```shell zenml integration install slack -y @@ -4583,6 +4538,12 @@ The `SlackAlerter` allows sending messages and questions to a Slack channel from --slack_token={{slack_token.oauth_token}} \ --slack_channel_id= ``` + - Find `` in channel details (starts with `C....`) and `` in app settings. + +3. **Add Alerter to Stack**: + ```shell + zenml stack register ... -al slack_alerter --set + ``` #### Usage @@ -4610,15 +4571,15 @@ The `SlackAlerter` allows sending messages and questions to a Slack channel from ``` 2. 
**Custom Settings**: - - Modify settings during runtime: + - Use different channel IDs: ```python @step(settings={"alerter": {"slack_channel_id": }}) def post_statement() -> None: Client().active_stack.alerter.post("Posting to another channel!") ``` -3. **Advanced Message Formatting**: - - Use `SlackAlerterParameters` and `SlackAlerterPayload` for detailed messages: +3. **Using `SlackAlerterParameters` and `SlackAlerterPayload`**: + - Customize messages: ```python from zenml import pipeline, step, get_step_context from zenml.client import Client @@ -4636,7 +4597,7 @@ The `SlackAlerter` allows sending messages and questions to a Slack channel from ), ) Client().active_stack.alerter.post( - message="Pipeline info.", + message="This is a message with additional information.", params=params ) ``` @@ -4647,7 +4608,7 @@ The `SlackAlerter` allows sending messages and questions to a Slack channel from from zenml import pipeline from zenml.integrations.slack.steps import ( slack_alerter_post_step, - slack_alerter_ask_step, + slack_alerter_ask_step ) @pipeline(enable_cache=False) @@ -4665,22 +4626,22 @@ For further details and configurable attributes, refer to the [SDK Docs](https:/ === File: docs/book/component-guide/alerters/alerters.md === -### Alerters Documentation Summary +### Alerters -**Alerters** enable sending messages to chat services (e.g., Slack, Discord) from pipelines, facilitating immediate notifications for failures, monitoring, and human-in-the-loop ML. +**Alerters** enable sending messages to chat services (e.g., Slack, Discord) from pipelines, facilitating notifications for failures, monitoring, and human-in-the-loop ML. #### Alerter Flavors -Currently available integrations: -- **SlackAlerter**: Interacts with Slack channels. -- **DiscordAlerter**: Interacts with Discord channels. -- **Custom Implementation**: Extend the alerter abstraction for other chat services. +Currently available integrations include: +- **SlackAlerter**: Interacts with a Slack channel. +- **DiscordAlerter**: Interacts with a Discord channel. +- **Custom Implementation**: Extend the alerter abstraction for other services. -To view available alerter flavors in the terminal, use: +To view available alerter flavors, use: ```shell zenml alerter flavor list ``` -#### Using Alerters with ZenML +#### Usage 1. **Register an Alerter**: ```shell zenml alerter register ... @@ -4689,7 +4650,7 @@ zenml alerter flavor list ```shell zenml stack register ... -al ``` -3. **Import and Use**: Import standard steps from the integration and utilize them in your pipelines. +3. **Import and Use**: Import the standard steps from the respective integration for use in pipelines. ================================================== @@ -4697,49 +4658,49 @@ zenml alerter flavor list ### Discord Alerter Overview -The `DiscordAlerter` allows sending automated messages to a Discord channel from ZenML pipelines. It includes two main steps: +The `DiscordAlerter` allows sending messages to a Discord channel from ZenML pipelines. It includes two key steps: 1. **`discord_alerter_post_step`**: Sends a message to a Discord channel and returns success status. -2. **`discord_alerter_ask_step`**: Sends a message and waits for user feedback, returning `True` only if the user approves the operation. +2. **`discord_alerter_ask_step`**: Sends a message and waits for user feedback, returning `True` only if the user approves the action. #### Use Cases - Immediate notifications for failures (e.g., model performance issues). 
-- Human-in-the-loop integration for critical pipeline steps (e.g., model deployment). +- Human-in-the-loop integration before executing critical steps (e.g., model deployment). ### Requirements -To use the `DiscordAlerter`, install the Discord integration: - +Install the Discord integration: ```shell zenml integration install discord -y ``` ### Setting Up a Discord Bot -1. Create a Discord workspace with a channel for notifications. -2. Create a Discord App with a bot and obtain the bot token. Ensure the bot has permissions to send and receive messages. +1. Create a Discord workspace and channel. +2. Create a Discord App with a bot. Ensure the bot has permissions to send and receive messages. ### Registering a Discord Alerter Register the `discord` alerter in ZenML: - ```shell zenml alerter register discord_alerter \ --flavor=discord \ --discord_token= \ --default_discord_channel_id= ``` - Add the alerter to your stack: - ```shell zenml stack register ... -al discord_alerter ``` -#### Important Parameters -- **DISCORD_CHANNEL_ID**: Obtain by right-clicking the channel and selecting 'Copy Channel ID'. Enable Developer Mode in settings if not visible. -- **DISCORD_TOKEN**: Found during bot setup; ensure the bot has necessary permissions (Read Messages, Send Messages). +#### Parameters +- **DISCORD_CHANNEL_ID**: Copy from the channel settings (enable Developer Mode if not visible). +- **DISCORD_TOKEN**: Obtain from the bot setup instructions. -### Using the Discord Alerter -After configuration, import and use the steps in your pipeline. A typical implementation might look like this: +**Permissions Required**: +- Read Messages/View Channels +- Send Messages +- Send Messages in Threads +### Using the Discord Alerter +Import the steps in your pipeline: ```python from zenml.integrations.discord.steps.discord_alerter_ask_step import discord_alerter_ask_step from zenml import step, pipeline @@ -4753,13 +4714,13 @@ def my_pipeline(...): ... message = my_formatter_step(artifact_to_be_communicated) approved = discord_alerter_ask_step(message) - ... # Subsequent steps based on `approved` + ... # Conditional behavior based on `approved` if __name__ == "__main__": my_pipeline() ``` -For more details on configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-discord/#zenml.integrations.discord.alerters.discord_alerter.DiscordAlerter). +For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-discord/#zenml.integrations.discord.alerters.discord_alerter.DiscordAlerter). ================================================== @@ -4767,11 +4728,10 @@ For more details on configurable attributes, refer to the [SDK Docs](https://sdk ### Develop a Custom Alerter -#### Overview -To create a custom alerter in ZenML, familiarize yourself with the general guide on writing custom component flavors. +Before creating a custom alerter, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction -The base class for alerters, `BaseAlerter`, defines two abstract methods: +The base abstraction for alerters includes two abstract methods: - `post(message: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message to a chat service, returning `True` if successful. 
- `ask(question: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message and waits for approval, returning `True` only if approved.

@@ -4784,8 +4744,10 @@ class BaseAlerter(StackComponent, ABC):
     return True
 ```

-#### Creating a Custom Alerter
-1. **Implement the Alerter**: Inherit from `BaseAlerter` and implement `post()` and `ask()`.
+#### Building Your Own Custom Alerter
+Creating a custom alerter involves three steps:
+
+1. **Inherit from `BaseAlerter`** and implement `post()` and `ask()` methods:

```python
from typing import Optional
@@ -4794,14 +4756,14 @@ from zenml.alerter import BaseAlerter, BaseAlerterStepParameters

class MyAlerter(BaseAlerter):
    def post(self, message: str, config: Optional[BaseAlerterStepParameters]) -> bool:
        ...
        return True

    def ask(self, question: str, config: Optional[BaseAlerterStepParameters]) -> bool:
        ...
        return True
```

-2. **Configuration**: Optionally, create a configuration class.
+2. **Implement a configuration object** if needed:

```python
from zenml.alerter.base_alerter import BaseAlerterConfig

class MyAlerterConfig(BaseAlerterConfig):
    my_param: str
```

-3. **Flavor Object**: Combine implementation and configuration in a flavor class.
+3. **Create a flavor object** that combines implementation and configuration:

```python
from typing import Type, TYPE_CHECKING
from zenml.alerter import BaseAlerterFlavor

+if TYPE_CHECKING:
+    from zenml.stack import StackComponent, StackComponentConfig
+

class MyAlerterFlavor(BaseAlerterFlavor):
    @property
    def name(self) -> str:
@@ -4833,7 +4798,7 @@ class MyAlerterFlavor(BaseAlerterFlavor):
 ```

 #### Registering the Custom Alerter
-Register your flavor using the CLI with dot notation:
+Register your new flavor via the CLI:

```shell
zenml alerter flavor register <path.to.MyAlerterFlavor>
@@ -4845,34 +4810,34 @@ Example registration:

 zenml alerter flavor register flavors.my_flavor.MyAlerterFlavor
 ```

-#### Important Notes
-- Ensure ZenML is initialized at the root of your repository for proper flavor resolution.
-- After registration, list available alerter flavors:
+**Important Note**: Ensure ZenML is initialized at the root of your repository for proper flavor resolution.
+
+After registration, list available alerter flavors:

```shell
zenml alerter flavor list
```

-#### Workflow Integration
-- The `MyAlerterFlavor` is used during flavor creation.
-- The `MyAlerterConfig` is utilized when registering or updating a stack component.
-- The `MyAlerter` is invoked when the component is in use, allowing separation of configuration from implementation.
+#### Key Points
+- **MyAlerterFlavor** is used during flavor creation.
+- **MyAlerterConfig** is used for validating values during stack component registration.
+- **MyAlerter** is utilized when the component is in use, allowing separation of configuration from implementation.

-This modular design enables registration of flavors and components without requiring all dependencies to be installed locally.
+This design enables registration of flavors and components even if their dependencies are not installed locally.

==================

=== File: docs/book/component-guide/artifact-stores/azure.md ===

-### Azure Blob Storage for ZenML Artifacts
+### Azure Blob Storage Artifact Store

-The Azure Artifact Store is a ZenML integration that utilizes Azure Blob Storage to store artifacts. It is suitable for projects requiring shared storage, remote components, or production-grade MLOps.
+The Azure Artifact Store is a ZenML integration that utilizes Azure Blob Storage to store ZenML artifacts. It is ideal for scenarios where local storage is insufficient, such as when sharing results, using remote components, or scaling production-grade MLOps. -#### Use Cases -- **Sharing Results**: Ideal for sharing pipeline results with team members or stakeholders. -- **Remote Components**: Necessary when using remote orchestrators like Kubeflow or Kubernetes. -- **Storage Limitations**: When local storage is insufficient. -- **Scalability**: Required for running pipelines at scale. +#### When to Use +- **Collaboration**: Share pipeline results with team members. +- **Remote Components**: Integrate with cloud-based orchestrators (e.g., Kubeflow). +- **Storage Limitations**: Overcome local storage constraints. +- **Scalability**: Handle production-scale demands. #### Deployment Steps 1. **Install Azure Integration**: @@ -4888,120 +4853,110 @@ The Azure Artifact Store is a ZenML integration that utilizes Azure Blob Storage ``` #### Authentication Methods -- **Implicit Authentication**: Quick setup using environment variables for Azure credentials. -- **Azure Service Connector**: Recommended for better security and integration with other Azure components. - -##### Implicit Authentication Setup -Set environment variables: -- For storage account key: - ```shell - export AZURE_STORAGE_ACCOUNT_NAME= - export AZURE_STORAGE_ACCOUNT_KEY= - ``` -- For service principal: - ```shell - export AZURE_STORAGE_CLIENT_ID= - export AZURE_STORAGE_CLIENT_SECRET= - export AZURE_STORAGE_TENANT_ID= - ``` - -##### Azure Service Connector Setup -1. Register the service connector: - ```sh +- **Implicit Authentication**: Quick local setup without explicit credentials. Set environment variables for Azure account key, connection string, or service principal credentials. +- **Azure Service Connector**: Recommended for better security and integration with remote components. Register using: + ```shell zenml service-connector register --type azure -i ``` -2. Connect to a blob storage container: - ```sh + Or configure with service principal: + ```shell zenml service-connector register --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= --resource-type blob-container --resource-id ``` -#### Connecting Azure Artifact Store -After setting up the service connector, connect the artifact store: -```sh +#### Connecting the Artifact Store +After setting up the Azure Service Connector, connect it to the Azure Artifact Store: +```shell zenml artifact-store connect -i ``` +For non-interactive connection: +```shell +zenml artifact-store connect --connector +``` #### Using ZenML Secrets -Store Azure credentials in a ZenML secret: +You can store Azure credentials in a ZenML Secret for better management: ```shell zenml secret create az_secret --account_name='' --account_key='' ``` -Register the artifact store with the secret: +Register the Artifact Store with the secret: ```shell zenml artifact-store register az_store -f azure --path='az://your-container' --authentication_secret=az_secret ``` #### Usage -Using the Azure Artifact Store is similar to other artifact stores in ZenML. For detailed implementation, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.artifact_stores). +Using the Azure Artifact Store is similar to other Artifact Store types in ZenML. 
For detailed configuration and usage, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.artifact_stores). ================================================== === File: docs/book/component-guide/artifact-stores/s3.md === -### Summary: Storing Artifacts in an AWS S3 Bucket +### Summary of AWS S3 Artifact Store Documentation #### Overview -The S3 Artifact Store in ZenML integrates with AWS S3 or compatible services (e.g., MinIO, Ceph RGW) for artifact storage. It is suitable for projects requiring shared storage, remote components, or production-grade MLOps. +The S3 Artifact Store is a ZenML integration that utilizes AWS S3 or compatible services (like MinIO or Ceph RGW) for artifact storage. It is suitable for projects requiring shared storage, remote components, or production-grade MLOps. -#### When to Use S3 Artifact Store -- Share pipeline results with team members. -- Integrate with remote components (e.g., Kubeflow, Kubernetes). -- Scale beyond local storage limitations. -- Manage production-level MLOps. +#### Use Cases +Consider using the S3 Artifact Store when: +- You need to share pipeline results. +- Components are running in the cloud. +- Local storage is insufficient. +- Running pipelines at scale. #### Deployment Steps -1. **Install S3 Integration**: +1. **Install the S3 Integration**: ```shell zenml integration install s3 -y ``` -2. **Register S3 Artifact Store**: - - **URI Format**: `s3://bucket-name` - - **Register Command**: +2. **Register the S3 Artifact Store**: + - Mandatory parameter: `--path=s3://bucket-name`. + - Example: ```shell zenml artifact-store register s3_store -f s3 --path=s3://bucket-name + ``` + +3. **Set Up a Stack**: + ```shell zenml stack register custom_stack -a s3_store ... --set ``` -3. **Authentication**: - - **Implicit Authentication**: Quick local setup using AWS CLI credentials. - - **AWS Service Connector** (recommended): For better security and access control. - - Register: - ```sh - zenml service-connector register --type aws -i - ``` - - Non-interactive: - ```sh - zenml service-connector register --type aws --resource-type s3-bucket --resource-name --auto-configure - ``` +#### Authentication Methods +- **Implicit Authentication**: Quick local setup using AWS CLI credentials. + - Limitations: Some dashboard functionalities may not work, and remote components may face access issues. -4. **Connect S3 Artifact Store**: - ```sh - zenml artifact-store connect -i - ``` - - Non-interactive: - ```sh - zenml artifact-store connect --connector - ``` +- **AWS Service Connector (Recommended)**: Provides better security and access management. + - Register using: + ```shell + zenml service-connector register --type aws -i + ``` + - Connect to a bucket: + ```shell + zenml artifact-store connect -i + ``` -5. 
**Using ZenML Secret for Authentication**: - - Create a ZenML secret for AWS access keys: - ```shell - zenml secret create s3_secret --aws_access_key_id='' --aws_secret_access_key='' - ``` - - Register S3 Artifact Store with the secret: - ```shell - zenml artifact-store register s3_store -f s3 --path='s3://your-bucket' --authentication_secret=s3_secret - ``` +#### ZenML Secret Management +You can store AWS access keys in ZenML secrets for enhanced security: +```shell +zenml secret create s3_secret --aws_access_key_id='' --aws_secret_access_key='' +``` +Register the artifact store with the secret: +```shell +zenml artifact-store register s3_store -f s3 --path='s3://your-bucket' --authentication_secret=s3_secret +``` #### Advanced Configuration -- Use `client_kwargs`, `config_kwargs`, and `s3_additional_kwargs` for custom settings: +You can customize connections using: +- `client_kwargs`: For parameters like `endpoint_url`. +- `config_kwargs`: For advanced botocore client settings. +- `s3_additional_kwargs`: For S3 API-specific parameters. + +Example: ```shell zenml artifact-store register minio_store -f s3 --path='s3://minio_bucket' --authentication_secret=s3_secret --client_kwargs='{"endpoint_url": "http://minio.cluster.local:9000", "region_name": "us-east-1"}' ``` #### Usage -Using the S3 Artifact Store is similar to other Artifact Store flavors in ZenML. For detailed usage, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-s3/#zenml.integrations.s3.artifact_stores.s3_artifact_store). +Using the S3 Artifact Store is similar to other artifact stores in ZenML. For detailed usage, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-s3/#zenml.integrations.s3.artifact_stores.s3_artifact_store). ================================================== @@ -5009,43 +4964,42 @@ Using the S3 Artifact Store is similar to other Artifact Store flavors in ZenML. ### Local Artifact Store -The Local Artifact Store in ZenML is a built-in option for storing artifacts on your local filesystem. +The Local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) that utilizes a local filesystem folder for artifact storage. #### Use Cases -- Ideal for beginners or those in the experimental phase of using ZenML. -- Does not require additional resources or managed object-store services like Amazon S3 or Google Cloud Storage. -- Not suitable for production due to limitations in sharing, accessibility, and lack of features like high-availability and backup. +- Ideal for beginners or evaluations of ZenML without needing additional resources or managed object-store services (e.g., Amazon S3, Google Cloud Storage). +- Not suitable for production due to lack of sharing capabilities, access from other machines, and essential features like high-availability and scalability. #### Limitations -- Only compatible with local orchestrators (e.g., local, local Kubeflow, local Kubernetes) and local model deployers (e.g., MLflow). -- Step Operators cannot be used with a local Artifact Store. +- Only compatible with local Orchestrators (e.g., local, local Kubeflow, local Kubernetes) and local Model Deployers (e.g., MLflow). +- Does not support Step Operators that run in remote environments. -Transitioning to a team or production environment requires replacing the local Artifact Store with a more robust option, without code changes. 
+Transitioning to a team or production setting requires replacing the Local Artifact Store with a more suitable option without code changes. #### Deployment -The default stack in ZenML includes a local Artifact Store. You can view the current configuration with: +The default stack in ZenML includes a Local Artifact Store: ```shell $ zenml stack list $ zenml artifact-store describe ``` -Artifacts are stored in a specified path on your local filesystem, which can be customized during registration: +Artifacts are stored in a specified local path. You can create additional instances: ```shell -# Register the local artifact store +# Register a local artifact store zenml artifact-store register custom_local --flavor local # Register and set a stack with the new artifact store zenml stack register custom_stack -o default -a custom_local --set ``` -**Note:** It is recommended to use the default path to avoid unexpected issues, as other components rely on this convention. +**Note:** The Local Artifact Store accepts a `path` parameter during registration, but using the default path is recommended to avoid issues. -For further details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.local_artifact_store). +For detailed implementation and configuration, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.local_artifact_store). #### Usage -Using the local Artifact Store is similar to using any other Artifact Store flavor in ZenML. +Using the Local Artifact Store is similar to other Artifact Store flavors, with artifacts stored locally. ================================================== @@ -5053,13 +5007,14 @@ Using the local Artifact Store is similar to using any other Artifact Store flav ### Google Cloud Storage (GCS) Artifact Store -The GCS Artifact Store is a ZenML integration that utilizes Google Cloud Storage (GCS) for storing artifacts. It is suitable for scenarios where local storage is inadequate, such as when sharing results, using remote components, or scaling MLOps pipelines. +The GCS Artifact Store is a ZenML integration that utilizes Google Cloud Storage (GCS) to store artifacts. It is suitable for projects requiring shared storage, remote components, or production-grade MLOps. -#### When to Use GCS Artifact Store -- Sharing pipeline results with team members or stakeholders. -- Integrating with remote components (e.g., Kubeflow, Kubernetes). -- Needing more storage than local machines can provide. -- Running production-grade MLOps pipelines. +#### Use Cases +Consider using GCS when: +- You need to share pipeline results with team members or stakeholders. +- Your stack includes remote components (e.g., Kubeflow). +- Local storage is insufficient. +- You require scalable storage for production pipelines. #### Deployment Steps 1. **Install GCP Integration**: @@ -5068,39 +5023,41 @@ The GCS Artifact Store is a ZenML integration that utilizes Google Cloud Storage ``` 2. **Register GCS Artifact Store**: - - The mandatory configuration is the root path URI in the format `gs://bucket-name`. + The mandatory parameter is the root path URI in the format `gs://bucket-name`. ```shell zenml artifact-store register gs_store -f gcp --path=gs://bucket-name zenml stack register custom_stack -a gs_store ... --set ``` #### Authentication Methods -- **Implicit Authentication**: Quick setup using local GCP CLI credentials. Requires Google Cloud CLI installation. 
Limited functionality with ZenML server. +Authentication is necessary for using GCS. Options include: + +- **Implicit Authentication**: Quick local setup using Google Cloud CLI. Requires local credentials but may limit functionality with remote components. -- **GCP Service Connector (Recommended)**: Offers better security and configuration options. Register using: - ```shell +- **GCP Service Connector (Recommended)**: Provides better security and configuration. Register a service connector: + ```sh zenml service-connector register --type gcp -i ``` - For auto-configuration targeting a specific GCS bucket: - ```shell + Or for a specific bucket: + ```sh zenml service-connector register --type gcp --resource-type gcs-bucket --resource-name --auto-configure ``` #### Connecting GCS Artifact Store -After setting up the GCP Service Connector, connect it to the GCS Artifact Store: -```shell +After setting up the service connector, connect the GCS Artifact Store: +```sh zenml artifact-store register -f gcp --path='gs://your-bucket' zenml artifact-store connect -i ``` For non-interactive connection: -```shell +```sh zenml artifact-store connect --connector ``` #### Using GCP Credentials -Alternatively, use a GCP Service Account Key stored in a ZenML Secret: -1. Create a service account and grant it the `Storage Object Admin` role. -2. Register the secret: +You can also use a GCP Service Account Key stored in a ZenML Secret: +1. Create a GCP service account with necessary permissions. +2. Store the key: ```shell zenml secret create gcp_secret --token=@path/to/service_account_key.json ``` @@ -5110,87 +5067,98 @@ Alternatively, use a GCP Service Account Key stored in a ZenML Secret: ``` #### Usage -Using the GCS Artifact Store is similar to other Artifact Store flavors. For detailed information, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.artifact_stores.gcp_artifact_store). +Once set up, using the GCS Artifact Store is similar to any other Artifact Store in ZenML. + +For further details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.artifact_stores.gcp_artifact_store). ================================================== === File: docs/book/component-guide/artifact-stores/artifact-stores.md === -### Summary of Artifact Stores Documentation +# Artifact Stores -**Artifact Store Overview** -- The Artifact Store is a crucial component in MLOps, serving as a data persistence layer for artifacts generated by machine learning pipelines (e.g., datasets, models). -- ZenML automatically serializes and saves artifacts in the Artifact Store, enabling features like caching, lineage tracking, and reproducibility. -- Not all pipeline step outputs are stored; this depends on the implementation of the associated **Materializer**. +## Overview +The Artifact Store is a crucial component of the MLOps stack, serving as a persistence layer for artifacts (datasets, models) generated by machine learning pipelines. ZenML automatically serializes and saves these artifacts, enabling features like caching, lineage tracking, and reproducibility. -**Customization** -- Custom Materializers can be created to store specific artifacts in different mediums (e.g., external model registries). -- For entirely different storage backends not covered by ZenML, users can extend the Artifact Store abstraction. 
+## Key Points +- **Materializers**: Determine how artifacts are serialized and stored. Most default Materializers use the active Stack's Artifact Store. Custom Materializers can be created for specific storage needs. +- **Storage Options**: The Artifact Store can be extended to support different storage backends beyond the default options. -**Usage** -- The Artifact Store is mandatory in ZenML stacks and must be configured for all pipelines. -- ZenML provides high-level APIs for storing and retrieving artifacts, minimizing direct interaction with the Artifact Store. +## When to Use +The Artifact Store is mandatory in ZenML stacks for storing all artifacts produced by pipeline runs. + +## Artifact Store Flavors +ZenML includes various Artifact Store flavors: +| Artifact Store | Flavor | Integration | URI Schema(s) | Notes | +|----------------|--------|-------------|----------------|-------| +| Local | local | built-in | None | Default store for local filesystem. | +| Amazon S3 | s3 | s3 | s3:// | Uses AWS S3 for storage. | +| Google Cloud | gcp | gcp | gs:// | Uses Google Cloud Storage. | +| Azure | azure | azure | abfs://, az:// | Uses Azure Blob Storage. | +| Custom | custom | | custom | Extend the Artifact Store abstraction. | + +To list available flavors: +```shell +zenml artifact-store flavor list +``` -**Artifact Store Flavors** -- ZenML includes a default `local` artifact store and supports various integrations: - - **Local**: Stores artifacts on the local filesystem. - - **Amazon S3**: Uses S3 as an object store. - - **Google Cloud Storage**: Uses GCP for storage. - - **Azure**: Uses Azure Blob Storage. - - **Custom Implementation**: Allows for user-defined storage solutions. +### Configuration +Each Artifact Store requires a `path` attribute, a URI pointing to the storage root. For example, to register an S3 store: +```shell +zenml artifact-store register s3_store -f s3 --path s3://my_bucket +``` -**Configuration Example** -- Registering an S3 Artifact Store: - ```shell - zenml artifact-store register s3_store -f s3 --path s3://my_bucket - ``` +## Usage +Typically, users interact with higher-level APIs to store and retrieve artifacts: +- Return objects from pipeline steps to save them automatically. +- Retrieve artifacts post-pipeline run. -**Artifact Store API** -- All Artifact Stores implement a standard IO API similar to file systems, allowing for consistent object manipulation. -- Access low-level API via: - - `zenml.io.fileio`: For file operations (e.g., `open`, `copy`, `remove`). - - `zenml.utils.io_utils`: Higher-level utilities for file transfers. +### Low-Level API +The Artifact Store API resembles a file system, allowing standard file operations. Access it via: +- `zenml.io.fileio`: Low-level utilities for object manipulation (e.g., `open`, `copy`, `remove`). +- `zenml.utils.io_utils`: Higher-level utilities for transferring objects between the Artifact Store and local storage. 
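+
+For the write direction, `io_utils` is assumed to offer a counterpart to the `read_file_contents_as_string` call used below; treat `write_file_contents_as_string` as an assumption to verify in the SDK docs:
+
+```python
+import os
+
+from zenml.client import Client
+from zenml.utils import io_utils
+
+# Resolve a URI under the active stack's Artifact Store root.
+root_path = Client().active_stack.artifact_store.path
+artifact_uri = os.path.join(root_path, "artifacts", "examples", "note.txt")
+
+# Assumed write-side counterpart of read_file_contents_as_string.
+io_utils.write_file_contents_as_string(artifact_uri, "example artifact")
+```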
-**Example Code Snippets** -- Writing to the Artifact Store: - ```python - import os - from zenml.client import Client - from zenml.io import fileio +#### Example: Writing to Artifact Store +```python +import os +from zenml.client import Client +from zenml.io import fileio - root_path = Client().active_stack.artifact_store.path - artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") - fileio.makedirs(os.path.dirname(artifact_uri)) - with fileio.open(artifact_uri, "w") as f: - f.write("example artifact") - ``` +root_path = Client().active_stack.artifact_store.path +artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") +fileio.makedirs(os.path.dirname(artifact_uri)) +with fileio.open(artifact_uri, "w") as f: + f.write("example artifact") +``` -- Reading from the Artifact Store: - ```python - from zenml.client import Client - from zenml.utils import io_utils +#### Example: Reading from Artifact Store +```python +from zenml.client import Client +from zenml.utils import io_utils - root_path = Client().active_stack.artifact_store.path - artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") - artifact_contents = io_utils.read_file_contents_as_string(artifact_uri) - ``` +root_path = Client().active_stack.artifact_store.path +artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") +artifact_contents = io_utils.read_file_contents_as_string(artifact_uri) +``` -- Using temporary files for serialization: - ```python - import os - import tempfile - from zenml.client import Client - from zenml.io import fileio +#### Temporary File Operations +For serialization with external libraries: +```python +import os +import tempfile +from zenml.client import Client +from zenml.io import fileio - root_path = Client().active_stack.artifact_store.path - artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.json") +root_path = Client().active_stack.artifact_store.path +artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.json") - with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=True) as f: - # Save to temporary file and copy to artifact store - fileio.copy(f.name, artifact_uri) - ``` +with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=True) as f: + # Save to temporary file + # Copy it into artifact store + fileio.copy(f.name, artifact_uri) +``` -This summary captures the essential aspects of setting up and using the Artifact Store in ZenML, including customization options, configuration, and practical code examples. +This summary captures the essential details about Artifact Stores in ZenML, including their purpose, configuration, usage, and examples for clarity. ================================================== @@ -5199,65 +5167,53 @@ This summary captures the essential aspects of setting up and using the Artifact ### Summary: Developing a Custom Artifact Store in ZenML #### Overview -ZenML provides built-in Artifact Store implementations for local and cloud storage (AWS, GCP, Azure). For custom object storage solutions, you can create a custom Artifact Store by extending ZenML. +ZenML provides built-in Artifact Store implementations for local and cloud storage. If you need a different object storage service, you can create a custom Artifact Store by extending ZenML. #### Base Abstraction -The `BaseArtifactStore` class is central to the ZenML stack: +The `BaseArtifactStore` class is central to ZenML's stack architecture. Key components include: -1. 
**Configuration**: Requires a `path` parameter for the artifact store's root location.
+1. **Configuration Parameter**: The `path` parameter specifies the root path of the artifact store.
2. **Supported Schemes**: The `SUPPORTED_SCHEMES` class variable must be defined in subclasses to indicate supported file path schemes (e.g., `{"abfs://", "az://"}` for Azure).
3. **Abstract Methods**: Subclasses must implement the following methods:
   - `open`, `copyfile`, `exists`, `glob`, `isdir`, `listdir`, `makedirs`, `mkdir`, `remove`, `rename`, `rmtree`, `stat`, `walk`.

-#### Implementation Example
+#### Example Implementation
```python
+from abc import abstractmethod
from zenml.enums import StackComponentType
-from zenml.stack import StackComponent, StackComponentConfig
-from typing import Any, List, Set, Tuple, Type, Union
+from zenml.stack import Flavor, StackComponent, StackComponentConfig
+from typing import Any, Callable, ClassVar, Iterable, List, Optional, Set, Tuple, Type, Union

PathType = Union[bytes, str]

class BaseArtifactStoreConfig(StackComponentConfig):
    path: str
-    SUPPORTED_SCHEMES: Set[str]
+    SUPPORTED_SCHEMES: ClassVar[Set[str]]

class BaseArtifactStore(StackComponent):
    @abstractmethod
    def open(self, name: PathType, mode: str = "r") -> Any:
        ...
    @abstractmethod
    def copyfile(self, src: PathType, dst: PathType, overwrite: bool = False) -> None:
        ...
    @abstractmethod
    def exists(self, path: PathType) -> bool:
        ...
    @abstractmethod
    def glob(self, pattern: PathType) -> List[PathType]:
        ...
    @abstractmethod
    def isdir(self, path: PathType) -> bool:
        ...
    @abstractmethod
    def listdir(self, path: PathType) -> List[PathType]:
        ...
    @abstractmethod
    def makedirs(self, path: PathType) -> None:
        ...
    @abstractmethod
    def mkdir(self, path: PathType) -> None:
        ...
    @abstractmethod
    def remove(self, path: PathType) -> None:
        ...
    @abstractmethod
    def rename(self, src: PathType, dst: PathType, overwrite: bool = False) -> None:
        ...
    @abstractmethod
    def rmtree(self, path: PathType) -> None:
        ...
    @abstractmethod
    def stat(self, path: PathType) -> Any:
        ...
    @abstractmethod
    def walk(self, top: PathType, topdown: bool = True, onerror: Optional[Callable[..., None]] = None) -> Iterable[Tuple[PathType, List[PathType], List[PathType]]]:
        ...

@@ -5265,29 +5221,24 @@ class BaseArtifactStoreFlavor(Flavor):
     @property
     @abstractmethod
-    def name(self) -> Type["BaseArtifactStore"]:
-        ...
+    def name(self) -> str:
+        ...
     @property
     def type(self) -> StackComponentType:
         return StackComponentType.ARTIFACT_STORE
     @property
     def config_class(self) -> Type[StackComponentConfig]:
         return BaseArtifactStoreConfig
     @property
     @abstractmethod
     def implementation_class(self) -> Type["BaseArtifactStore"]:
         ...
```

-#### Integration with ZenML
-- When an artifact store is instantiated and added to a stack, it creates a filesystem for each pipeline run, accessible via `zenml.io.fileio`.
-- Custom Artifact Stores can be implemented by:
-  1. Inheriting from `BaseArtifactStore` and implementing required methods.
-  2. Inheriting from `BaseArtifactStoreConfig` and defining `SUPPORTED_SCHEMES`.
-  3. Inheriting from `BaseArtifactStoreFlavor`.
+#### Registering Your Custom Artifact Store
+To register your custom Artifact Store, follow these steps:
+1. Create a class inheriting from `BaseArtifactStore` and implement the abstract methods.
+2. Create a class inheriting from `BaseArtifactStoreConfig` and define `SUPPORTED_SCHEMES`.
+3. Inherit from `BaseArtifactStoreFlavor` to combine both classes.
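+
+A compact sketch tying the three steps together (import paths assumed; every abstract method listed above must be implemented before the store is usable):
+
+```python
+from typing import Any, ClassVar, Set, Type
+
+from zenml.artifact_stores import (
+    BaseArtifactStore,
+    BaseArtifactStoreConfig,
+    BaseArtifactStoreFlavor,
+)
+
+class MyArtifactStoreConfig(BaseArtifactStoreConfig):
+    SUPPORTED_SCHEMES: ClassVar[Set[str]] = {"my://"}  # hypothetical scheme
+
+class MyArtifactStore(BaseArtifactStore):
+    def open(self, name, mode: str = "r") -> Any:
+        ...  # delegate to your storage backend's client
+
+    # ... implement copyfile, exists, glob, isdir, listdir, makedirs,
+    # mkdir, remove, rename, rmtree, stat and walk the same way.
+
+class MyArtifactStoreFlavor(BaseArtifactStoreFlavor):
+    @property
+    def name(self) -> str:
+        return "my_flavor"
+
+    @property
+    def config_class(self) -> Type[BaseArtifactStoreConfig]:
+        return MyArtifactStoreConfig
+
+    @property
+    def implementation_class(self) -> Type[BaseArtifactStore]:
+        return MyArtifactStore
+```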
-#### Registration -Register the custom flavor using: +Register via CLI: ```shell zenml artifact-store flavor register ``` @@ -5296,13 +5247,17 @@ Example: zenml artifact-store flavor register flavors.my_flavor.MyArtifactStoreFlavor ``` -#### Important Notes -- Ensure ZenML is initialized at the root of your repository for proper flavor resolution. -- Custom Artifact Stores must authenticate with their back-end without relying on local environment settings. -- For visualizations, ensure your custom store can access and load artifacts appropriately. +#### Important Considerations +- Ensure ZenML is initialized at the root of your repository for proper resolution. +- After registration, list available flavors with: +```shell +zenml artifact-store flavor list +``` -#### Additional Resources -Refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.base_artifact_store.BaseArtifactStore) for complete implementation details and examples. +#### Enabling Visualizations +For visualizations to work with your custom Artifact Store, ensure it can authenticate to the backend without relying on local environment settings. Install necessary dependencies in the deployed environment. + +For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.base_artifact_store.BaseArtifactStore). ================================================== @@ -5310,25 +5265,25 @@ Refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/ ### Feature Stores -Feature stores enable data teams to manage data through both offline and online low-latency stores, ensuring synchronization between the two. They provide a centralized registry for features and feature schemas, which is essential for data scientists who need different access methods for batch and real-time data. Feast addresses the issue of train-serve skew, where training and serving data diverge. +Feature stores enable data teams to manage data through both offline and online low-latency stores, ensuring synchronization between them. They provide a centralized registry for features and their schemas, catering to different access needs for batch and real-time data. Feast addresses the issue of train-serve skew, where training and serving data diverge. ### When to Use It -Feature stores are optional in the ZenML Stack and should be utilized for: +Feature stores are optional components in the ZenML Stack, primarily used for: - Productionalizing new features - Reusing existing features across pipelines and models - Ensuring consistency between training and serving data -- Providing a central registry for features and schemas +- Providing a central registry of features and schemas ### Available Feature Stores -ZenML integrates with various feature stores, primarily through the `feast` integration. 
Here are the options: +ZenML integrates with various feature stores, including: -| Feature Store | Flavor | Integration | Notes | -|------------------------------|---------|-------------|-------------------------------------------------| -| [FeastFeatureStore](feast.md)| `feast` | `feast` | Connects ZenML with existing Feast | -| [Custom Implementation](custom.md)| _custom_ | | Extend the feature store abstraction | +| Feature Store | Flavor | Integration | Notes | +|----------------------------|---------|-------------|--------------------------------------------| +| [FeastFeatureStore](feast.md) | `feast` | `feast` | Connects ZenML with existing Feast | +| [Custom Implementation](custom.md) | _custom_ | | Allows for custom feature store implementations | To view available feature store flavors, use: @@ -5338,16 +5293,16 @@ zenml feature-store flavor list ### How to Use It -The feature store implementation is based on the Feast integration. For detailed usage, refer to the [Feast documentation](feast.md#how-do-you-use-it). +The feature store implementation is based on the Feast integration. For usage details, refer to the [Feast documentation](feast.md#how-do-you-use-it). ================================================== === File: docs/book/component-guide/feature-stores/feast.md === -### Summary of Feast Feature Store Documentation +### Summary: Managing Data in Feast Feature Stores **Feast Overview** -Feast (Feature Store) is an operational data system designed for managing and serving machine learning features for production models. It supports both low-latency online stores for real-time predictions and offline stores for batch scoring or model training. +Feast (Feature Store) is an operational data system designed for managing and serving machine learning features to production models. It supports both low-latency online stores for real-time predictions and offline stores for batch scoring or model training. **Use Cases** Feast enables: @@ -5356,7 +5311,7 @@ Feast enables: **Deployment** To deploy Feast with ZenML: -1. Ensure you have a Feast feature store. If not, follow the [Feast Documentation](https://docs.feast.dev/how-to-guides/feast-snowflake-gcp-aws/deploy-a-feature-store). +1. Ensure you have a Feast feature store. If not, refer to the [Feast Documentation](https://docs.feast.dev/how-to-guides/feast-snowflake-gcp-aws/deploy-a-feature-store). 2. 
Install the Feast integration in ZenML: ```shell zenml integration install feast @@ -5368,7 +5323,7 @@ To deploy Feast with ZenML: ``` **Usage** -To retrieve features from a registered feature store, create a step that interfaces with the feature store: +To retrieve features from a registered feature store, create a step that interfaces with it: ```python from datetime import datetime from typing import Any, Dict, List, Union @@ -5378,21 +5333,31 @@ from zenml.client import Client @step def get_historical_features(entity_dict: Union[Dict[str, Any], str], features: List[str], full_feature_names: bool = False) -> pd.DataFrame: + """Fetch historical features from Feast.""" feature_store = Client().active_stack.feature_store if not feature_store: - raise DoesNotExistException("Feast feature store component not available.") - - params.entity_dict["event_timestamp"] = [datetime.fromisoformat(val) for val in entity_dict["event_timestamp"]] + raise DoesNotExistException("Feast feature store component is not available.") + + entity_dict["event_timestamp"] = [datetime.fromisoformat(val) for val in entity_dict["event_timestamp"]] entity_df = pd.DataFrame.from_dict(entity_dict) return feature_store.get_historical_features(entity_df=entity_df, features=features, full_feature_names=full_feature_names) entity_dict = { "driver_id": [1001, 1002, 1003], - "event_timestamp": [datetime(2021, 4, 12, 10, 59, 42).isoformat(), datetime(2021, 4, 12, 8, 12, 10).isoformat(), datetime(2021, 4, 12, 16, 40, 26).isoformat()], + "label_driver_reported_satisfaction": [1, 5, 3], + "event_timestamp": [ + datetime(2021, 4, 12, 10, 59, 42).isoformat(), + datetime(2021, 4, 12, 8, 12, 10).isoformat(), + datetime(2021, 4, 12, 16, 40, 26).isoformat(), + ], } -features = ["driver_hourly_stats:conv_rate", "driver_hourly_stats:acc_rate", "driver_hourly_stats:avg_daily_trips"] +features = [ + "driver_hourly_stats:conv_rate", + "driver_hourly_stats:acc_rate", + "driver_hourly_stats:avg_daily_trips", +] @pipeline def my_pipeline(): @@ -5402,9 +5367,9 @@ def my_pipeline(): **Important Notes** - Online data retrieval is currently unsupported in deployed models. -- ZenML uses Pydantic for serialization, which limits data types (e.g., cannot handle Pandas `DataFrame`s directly). +- ZenML's use of Pydantic limits input types to basic data types; complex types like `DataFrame` or `datetime` require conversion. -For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-feast/#zenml.integrations.feast.feature_stores.feast_feature_store.FeastFeatureStore). +For more details on configurable attributes of the Feast feature store, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-feast/#zenml.integrations.feast.feature_stores.feast_feature_store.FeastFeatureStore). ================================================== @@ -5412,11 +5377,16 @@ For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integr ### Develop a Custom Feature Store -**Overview**: Feature stores enable data teams to manage data through an offline store and an online low-latency store, ensuring synchronization between them. They also provide a centralized registry for features and feature schemas for team or organizational use. +**Overview**: Feature stores enable data teams to provide data through both an offline store and an online low-latency store, ensuring synchronization between them. 
They also serve as a centralized registry for features and feature schemas within a team or organization.
+
+**Prerequisites**: Familiarize yourself with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavor concepts.
+
+**Current Status**: The base abstraction for feature stores is under development and not yet available for extension. For immediate use, refer to the list of existing feature stores.

-**Important Note**: The base abstraction for feature stores is currently under development, and extensions are not available at this time. For immediate use, refer to the list of existing feature stores.

-**Recommendation**: Before starting, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavor concepts.

==================================================

=== File: docs/book/component-guide/annotators/annotators.md ===

### Annotators in ZenML

-**Overview**: Annotators are a component of the ZenML stack that facilitate data annotation within ML workflows. They allow users to launch annotation processes, configure datasets, and track labeled tasks.
-
-**Importance of Data Annotation**: Data annotation is crucial in MLOps but often overlooked. ZenML aims to support iterative annotation workflows, integrating labelers into the ML process.
+**Overview**:
+Annotators are a component of the ZenML stack that facilitate data annotation within ML workflows. They enable users to launch annotation processes, configure datasets, and track labeled tasks. Data annotation is essential in MLOps, and ZenML aims to support iterative annotation workflows that integrate labeling into the ML lifecycle.

-**When to Annotate**:
-1. **At the Start**: Label initial data to bootstrap models, iteratively improving labels and model predictions.
-2. **As New Data Arrives**: Regularly check and update labeling processes to incorporate new data.
-3. **Inference Samples**: Store and label data from model predictions for comparison and retraining.
-4. **Ad Hoc Interventions**: Identify and correct bad labels or address class imbalances through targeted annotation.
+**Key Use Cases**:
+1. **Initial Labeling**: Start labeling data to bootstrap models, iterating between labeling and model training to refine definitions and standards.
+2. **Ongoing Data**: Regularly check and label new incoming data, while considering automation for data drift detection.
+3. **Inference Samples**: Store and label data from model predictions to compare with actual labels, aiding in model retraining.
+4. **Ad Hoc Annotation**: Identify and annotate challenging examples or correct bad labels, especially in cases of class imbalance.

**Core Features**:
- Seamless integration of labels in training steps.
@@ -5441,24 +5410,28 @@ For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integr
- Generation of UI config files for tools like Label Studio.

**Available Annotators**:
-- **ArgillaAnnotator**: Connects ZenML with Argilla.
-- **LabelStudioAnnotator**: Connects ZenML with Label Studio.
-- **PigeonAnnotator**: Limited to Jupyter notebooks for image/text classification. -- **ProdigyAnnotator**: Connects ZenML with Prodigy. -- **Custom Implementation**: Allows users to create their own annotator. +ZenML supports various annotators through integrations: +| Annotator | Flavor | Integration | Notes | +|-------------------------|---------------|------------------|-----------------------------------------| +| ArgillaAnnotator | `argilla` | `argilla` | Connects ZenML with Argilla | +| LabelStudioAnnotator | `label_studio`| `label_studio` | Connects ZenML with Label Studio | +| PigeonAnnotator | `pigeon` | `pigeon` | Limited to Jupyter notebooks for image/text classification | +| ProdigyAnnotator | `prodigy` | `prodigy` | Connects ZenML with Prodigy | +| Custom Implementation | _custom_ | | Extend the annotator abstraction | **Command to List Annotator Flavors**: ```shell zenml annotator flavor list ``` -**Usage**: The annotator implementation is based on the Label Studio integration. For detailed usage, refer to the Label Studio documentation. +**Usage**: +The annotator implementation is primarily based on the Label Studio integration. For detailed usage, refer to the [Label Studio documentation](label-studio.md#how-do-you-use-it). Note that Pigeon has limited functionality. -**Naming Conventions**: -- ZenML uses "Dataset" for grouped annotations (Label Studio uses "Project"). -- Individual annotation units are called "tasks" in ZenML, aligning with Label Studio terminology. +**Terminology**: +- ZenML uses "Dataset" to refer to a grouping of annotations/tasks, aligning with most tools, while Label Studio uses "Project." +- The unit of "an annotation + source data" is termed "tasks" in ZenML, consistent with Label Studio. -This concise summary captures the essential information about annotators in ZenML, their importance, usage, and available tools while maintaining clarity and technical accuracy. +This concise overview captures the essential information about annotators in ZenML, ensuring clarity on their purpose, use cases, features, and available integrations. ================================================== @@ -5466,47 +5439,44 @@ This concise summary captures the essential information about annotators in ZenM ### Prodigy Annotation Tool Overview -**Prodigy** is a paid annotation tool used for creating training and evaluation data for machine learning models. It aids in data inspection, cleaning, error analysis, and developing rule-based systems. +**Prodigy** is a paid annotation tool designed for creating training and evaluation data for machine learning models. It aids in data inspection, cleaning, error analysis, and developing rule-based systems. -#### Key Features: -- **Custom Workflows**: Prodigy supports pre-built workflows and allows custom scripts for data loading, saving, and UI customization. -- **Fast Annotation**: The web application is designed for efficient data annotation. - -#### Usage Context: -Consider using Prodigy when labeling data for your ML workflow, integrating it as an optional component in your ZenML stack. +#### Key Features +- **Integration with ZenML**: Prodigy can be integrated into the ZenML stack for ML workflows. +- **Custom Workflows**: Users can create custom scripts to load/save data, modify annotation questions, and customize the front-end using HTML/JavaScript. +- **Fast Annotation**: The web application is optimized for efficient data annotation. -### Deployment Steps: -1. 
**Install Prodigy**: Requires a license and the `urllib3<2` dependency. Refer to the [Prodigy installation guide](https://prodi.gy/docs/install).
+#### Deployment Steps
+1. **Install Prodigy**: Requires a license. Follow the [Prodigy installation guide](https://prodi.gy/docs/install). Ensure `urllib3<2` is installed.
2. **Register Prodigy with ZenML**:
   ```shell
   zenml integration export-requirements --output-file prodigy-requirements.txt prodigy
   zenml annotator register prodigy --flavor prodigy
   ```
-   Optionally, specify a custom config file:
+   Optionally, specify a custom config path:
   ```shell
   zenml annotator register prodigy --flavor prodigy --custom_config_path=""
   ```
-3. **Set Up Stack**:
+
+3. **Update ZenML Stack**:
   ```shell
   zenml stack copy default annotation
   zenml stack update annotation -an prodigy
   zenml stack set annotation
   ```
-   Verify setup:
+
+#### Usage
+- **Accessing Datasets**:
   ```shell
   zenml annotator dataset list
   ```
-
-### Annotation Process:
-Prodigy does not require pre-starting the annotator. Use it as per the [Prodigy documentation](https://prodi.gy) and access labeled data via ZenML's API.
-
-- To annotate a dataset:
+- **Annotating a Dataset**:
   ```shell
   zenml annotator dataset annotate your_dataset --command="textcat.manual news_topics ./news_headlines.jsonl --label Technology,Politics,Economy,Entertainment"
   ```

-### Importing Annotations:
-Within a ZenML step, you can import annotations:
+#### Importing Annotations
+To import annotations into a ZenML step:
```python
from typing import List, Dict, Any
from zenml import step
@@ -5519,52 +5489,52 @@ def import_annotations() -> List[Dict[str, Any]]:
    return annotations
```

-### Prodigy Annotator Component:
-The Prodigy annotator component extends `BaseAnnotator`, implementing core methods for dataset registration and annotation export. It includes additional methods specific to Prodigy for enhanced functionality.
+#### Prodigy Annotator Component
+The Prodigy annotator component extends the `BaseAnnotator` class, implementing the core methods for dataset registration and annotation export, and adds Prodigy-specific methods so labeled data can be used directly in ZenML steps.

-For further details, consult the Prodigy documentation and ZenML integration guides.
+For further details, refer to the [Prodigy documentation](https://prodi.gy/docs).

==================================================

=== File: docs/book/component-guide/annotators/label-studio.md ===

-### Label Studio Integration with ZenML
-
-**Overview**: Label Studio is an open-source annotation platform for data scientists and ML practitioners, supporting various annotation types including computer vision, audio, text/NLP, time series, and multi-modal tasks.
+### Label Studio Overview
+Label Studio is an open-source annotation platform for data scientists and ML practitioners, supporting various annotation types:
+- **Computer Vision**: image classification, object detection, semantic segmentation
+- **Audio & Speech**: classification, speaker diarization, emotion recognition, transcription
+- **Text/NLP**: classification, NER, question answering, sentiment analysis
+- **Time Series**: classification, segmentation, event recognition
+- **Multi-Modal/Domain**: dialogue processing, OCR, time series with reference

-**Use Cases**: It is beneficial for labeling data in ML workflows, particularly when integrated into a ZenML stack.
Currently, it supports cloud artifact stores like AWS S3, GCP/GCS, and Azure Blob Storage. Local stacks are not supported for the annotation component. +### Use Case +Label Studio can be integrated into ML workflows for data labeling. It is compatible with cloud artifact stores like AWS S3, GCP/GCS, and Azure Blob Storage. Local stacks are not supported for the annotation component. ### Deployment Steps - -1. **Install the Integration**: +1. **Install Label Studio Integration**: ```shell zenml integration install label_studio ``` - -2. **Set Up Label Studio**: - - Clone the repository and start the local instance: + +2. **Obtain API Key**: + Clone the repository and start Label Studio: ```shell git clone https://github.com/HumanSignal/label-studio.git cd label-studio docker-compose up -d ``` - - Access the web interface at [http://localhost:8080/](http://localhost:8080/) and obtain your API key from the user account settings. + Access the web interface at [http://localhost:8080/](http://localhost:8080/) to get your API key. 3. **Register API Key**: ```shell zenml secret create label_studio_secrets --api_key="" ``` -4. **Register the Annotator**: +4. **Register Annotator**: ```shell zenml annotator register label_studio --flavor label_studio --authentication_secret="label_studio_secrets" --port=8080 ``` - - For deployed instances, include the instance URL: - ```shell - zenml annotator register label_studio --flavor label_studio --authentication_secret="" --instance_url="" --port=80 - ``` -5. **Create and Set the Stack**: +5. **Update Stack**: ```shell zenml stack copy default annotation zenml stack update annotation -a @@ -5573,70 +5543,67 @@ For further details, consult the Prodigy documentation and ZenML integration gui ``` ### Usage - -- Use the CLI commands to interact with datasets: - - List datasets: +After setup, use the CLI commands for dataset management: +- List datasets: ```shell zenml annotator dataset list ``` - - Annotate a dataset: +- Annotate a dataset: ```shell zenml annotator dataset annotate ``` ### Key Components - -- **Label Studio Annotator**: Inherits from `BaseAnnotator`, requiring core methods for dataset registration and annotation export. It manages the server process for the web interface. - +- **Label Studio Annotator**: Inherits from `BaseAnnotator`, with methods for dataset registration, annotation export, and starting the annotator daemon. + - **Standard Steps**: - - `LabelStudioDatasetRegistrationConfig`: Config for registering datasets. - - `LabelStudioDatasetSyncConfig`: Config for syncing datasets. + - `LabelStudioDatasetRegistrationConfig`: Config for dataset registration. + - `LabelStudioDatasetSyncConfig`: Config for syncing new data. - `get_or_create_dataset`: Registers or retrieves a dataset. - - `get_labeled_data`: Fetches labeled data in Label Studio format. - - `sync_new_data_to_label_studio`: Ensures data synchronization with the cloud artifact store. + - `get_labeled_data`: Retrieves labeled data in Label Studio format. + - `sync_new_data_to_label_studio`: Ensures data and annotations are synced. -- **Helper Functions**: ZenML provides functions to create 'label config' strings for object detection, image classification, and OCR. +### Helper Functions +Label Studio requires 'label config' for dataset registration, which can be generated using ZenML's helper functions for object detection, image classification, and OCR. 
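+
+For instance, an image-classification config might be generated as in the following sketch. This assumes the generator functions live under `zenml.integrations.label_studio.label_config_generators` and return the config string together with its type; the exact module path and signature may differ across ZenML versions:
+
+```python
+from zenml.integrations.label_studio.label_config_generators import (
+    generate_image_classification_label_config,
+)
+
+# Produces the XML 'label config' that Label Studio expects for an
+# image-classification project, plus the matching label config type.
+label_config, label_config_type = generate_image_classification_label_config(
+    ["cat", "dog"]
+)
+```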
-For further details, refer to the [Label Studio documentation](https://labelstud.io/guide/tasks.html) and the [ZenML GitHub repository](https://github.com/zenml-io/zenml). +For more details, refer to the [Label Studio documentation](https://labelstud.io/guide/tasks.html) and the [ZenML GitHub repository](https://github.com/zenml-io/zenml). ================================================== === File: docs/book/component-guide/annotators/argilla.md === -### Argilla Documentation Summary - -**Overview**: -Argilla is a collaborative tool designed for AI engineers and domain experts to create high-quality datasets. It enhances data curation through human and machine feedback, supporting the MLOps cycle from data labeling to model monitoring. +### Argilla Overview +Argilla is a collaboration tool designed for AI engineers and domain experts to create high-quality datasets for machine learning projects. It enhances data curation through human and machine feedback, supporting the entire MLOps cycle from data labeling to model monitoring. Its unique focus on human-in-the-loop approaches sets it apart from competitors. -**Use Cases**: -Argilla is beneficial for labeling textual data in ML workflows. It can be integrated into a ZenML stack, supporting both local (Docker-backed) and deployed instances, including deployment as a Hugging Face Space. +### Use Cases +Argilla is ideal for labeling textual data in ML workflows. It can be integrated into a ZenML stack, supporting annotation at various stages. -**Deployment**: -To deploy Argilla, install the integration: +### Deployment +To deploy Argilla, install the ZenML integration: ```shell zenml integration install argilla ``` -You can register the annotator with an API key directly or as a secret for security. For secret registration: +You can register your API key directly or as a secret for security. For secret registration: ```shell zenml secret create argilla_secrets --api_key="" ``` -Then register the annotator: +Then, register the annotator: ```shell zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --port=6900 ``` -For a deployed instance, specify the URL without a trailing slash. If using a private Hugging Face Space, include the headers parameter with your token: +For a deployed instance, specify the instance URL without a trailing `/` and include headers for private Hugging Face Spaces: ```shell zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --instance_url="https://[your-owner-name]-[your_space_name].hf.space" --headers='{"Authorization": "Bearer {[your_hugging_face_token]}"}' ``` -Add components to a stack: +Add components to a stack and set it as active: ```shell zenml stack copy default annotation @@ -5650,18 +5617,17 @@ Verify with: zenml annotator dataset list ``` -**Usage**: -Access data and annotations via the CLI. For dataset annotation, use: +### Usage +Access data and annotations using the CLI: -```shell -zenml annotator dataset annotate -``` +- List datasets: `zenml annotator dataset list` +- Annotate a dataset: `zenml annotator dataset annotate ` -**Argilla Annotator Component**: -The Argilla annotator extends the `BaseAnnotator` class, implementing core methods for dataset registration and annotation export. It requires a running server for the web interface. +### Argilla Annotator Component +The Argilla annotator extends the `BaseAnnotator` class, implementing core methods for dataset registration and state management. 
It supports dataset registration, annotation export, and requires a running server for the web interface. -**Argilla Annotator SDK**: -To use the SDK in Python: +### Argilla Annotator SDK +For SDK usage in Python: ```python from zenml.client import Client @@ -5679,46 +5645,42 @@ dataset = annotator.get_dataset("dataset_name") annotations = annotator.get_labeled_data(dataset_name="dataset_name") ``` -For further details, refer to the [Argilla documentation](https://docs.argilla.io/en/latest/). +For more details, refer to the [Argilla documentation](https://docs.argilla.io/en/latest/). ================================================== === File: docs/book/component-guide/annotators/pigeon.md === -### Pigeon Annotation Tool Overview +### Pigeon Annotation Tool -**Pigeon** is an open-source annotation tool for labeling data within Jupyter notebooks, supporting: +**Overview**: +Pigeon is a lightweight, open-source annotation tool for labeling data directly within Jupyter notebooks. It supports: - Text Classification - Image Classification - Text Captioning -#### Use Cases -Pigeon is ideal for: -- Small to medium-sized datasets +**Use Cases**: +Ideal for small to medium-sized datasets in ML workflows, Pigeon is useful for: - Quick labeling tasks -- Iterative labeling during ML project exploration -- Collaborative labeling in Jupyter environments +- Iterative labeling during exploratory phases +- Collaborative labeling in Jupyter notebooks -#### Deployment Steps -1. **Install Pigeon Integration:** +**Deployment Steps**: +1. Install the ZenML Pigeon integration: ```shell zenml integration install pigeon ``` - -2. **Register the Annotator:** +2. Register the Pigeon annotator, specifying the output directory: ```shell zenml annotator register pigeon --flavor pigeon --output_dir="path/to/dir" ``` - -3. **Update Your Stack:** +3. Update your stack to include the Pigeon annotator: ```shell zenml stack update --annotator pigeon ``` -#### Usage -Access the Pigeon annotator in your Jupyter notebook: - -- **Text Classification Example:** +**Usage**: +- **Text Classification**: ```python from zenml.client import Client @@ -5729,7 +5691,7 @@ Access the Pigeon annotator in your Jupyter notebook: ) ``` -- **Image Classification Example:** +- **Image Classification**: ```python from zenml.client import Client from IPython.display import display, Image @@ -5742,16 +5704,16 @@ Access the Pigeon annotator in your Jupyter notebook: ) ``` -#### Dataset Management Commands +**Annotation Management**: - List datasets: `zenml annotator dataset list` - Delete a dataset: `zenml annotator dataset delete ` -- Get dataset statistics: `zenml annotator dataset stats ` +- Get dataset stats: `zenml annotator dataset stats ` -#### Output -Annotations are saved as JSON files in the specified output directory, with filenames representing dataset names. +**Output**: +Annotations are saved as JSON files in the specified output directory, with filenames corresponding to dataset names. -### Acknowledgements -Pigeon was created by [Anastasis Germanidis](https://github.com/agermanidis) and is available as a [Python package](https://pypi.org/project/pigeon-jupyter/) and [GitHub repository](https://github.com/agermanidis/pigeon). It is licensed under the Apache License and has been updated for compatibility with recent `ipywidgets` versions. 
+
+**Acknowledgements**:
+Pigeon was created by [Anastasis Germanidis](https://github.com/agermanidis) and is available as a [Python package](https://pypi.org/project/pigeon-jupyter/) and [GitHub repository](https://github.com/agermanidis/pigeon). It is licensed under the Apache License.

==================================================

@@ -5759,42 +5721,44 @@

=== File: docs/book/component-guide/annotators/custom.md ===

### Develop a Custom Annotator

-**Overview**: Custom annotators in ZenML allow for data annotation within your stack and pipelines. Familiarize yourself with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational concepts.
+**Overview**: Custom annotators are stack components in ZenML that facilitate data annotation within your pipelines. Familiarize yourself with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavor concepts.

-**Functionality**: Annotators can be launched via CLI commands to configure datasets and retrieve statistics on labeled tasks.
+**Functionality**: Annotators allow you to launch annotation tasks via CLI, configure datasets, and retrieve statistics on labeled tasks.

-**Current Status**: The base abstraction for annotators is under development, and extension is currently not possible. Users should refer to the list of available feature stores for immediate use.
+**Current Status**: The base abstraction for annotators is under development, and extension is currently not supported. Users should refer to the list of existing annotators for immediate use.

-**Note**: Keep an eye on updates regarding the base abstraction for future enhancements.
+**Note**: Keep an eye out for updates on the base abstraction.

==================================================

=== File: docs/book/component-guide/model-deployers/vllm.md ===

-### vLLM Deployment Documentation Summary
+### vLLM: Deploying Your LLM Locally

-**vLLM Overview**
-[vLLM](https://docs.vllm.ai/en/latest/) is a library designed for efficient LLM inference and serving, featuring:
+**Overview**: vLLM is a library designed for efficient LLM inference and serving, offering features like continuous batching, quantization (GPTQ, AWQ, INT4, INT8, FP8), PagedAttention, speculative decoding, and chunked pre-fill.

-- High throughput for Large Language Models (LLMs) with an OpenAI-compatible API server.
-- Continuous request batching.
-- Support for quantization methods: GPTQ, AWQ, INT4, INT8, FP8.
-- Advanced features like PagedAttention, Speculative decoding, and Chunked pre-fill.
+#### When to Use vLLM
+- Deploy large language models with high throughput.
+- Create an OpenAI-compatible API server.

-**Deployment Steps**
-1. **Install vLLM ZenML Integration**:
+#### Deployment Steps
+1. **Install vLLM Integration**:
   ```bash
   zenml integration install vllm -y
   ```
-2. **Register the vLLM Model Deployer**:
+2. **Register vLLM Model Deployer**:
   ```bash
   zenml model-deployer register vllm_deployer --flavor=vllm
   ```
-   This command sets up a local vLLM deployment server as a background daemon.

-**Usage Example**
-To deploy an LLM, utilize the `vllm_model_deployer_step` in a ZenML pipeline:
+This sets up a local vLLM deployment server as a background daemon.
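+
+Because the server exposes an OpenAI-compatible API, a deployed model can be queried with any OpenAI client. A minimal sketch, assuming the `openai` package is installed and the service ended up at `http://localhost:8000/v1` (in practice, read the prediction URL from the deployed `VLLMDeploymentService`); the deployment itself is shown in the usage example below:
+
+```python
+from openai import OpenAI
+
+# Base URL and model name are illustrative assumptions; take the actual
+# endpoint from the service object returned by the deployer step.
+client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
+
+completion = client.completions.create(
+    model="gpt2",  # the Hugging Face model name passed to the deployer
+    prompt="ZenML pipelines are",
+    max_tokens=32,
+)
+print(completion.choices[0].text)
+```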
+ +#### Usage Example +To see a deployment pipeline in action, refer to the [deployment pipeline example](https://github.com/zenml-io/zenml-projects/blob/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer/pipelines/deploy_pipeline.py#L25). + +**Deploy an LLM**: +Use the `vllm_model_deployer_step` to create a `VLLMDeploymentService`. Here’s a concise example: ```python from zenml import pipeline @@ -5804,100 +5768,98 @@ from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentServi @pipeline() def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> Annotated[VLLMDeploymentService, "GPT2"]: - service = vllm_model_deployer_step(model=model, timeout=timeout) - return service + return vllm_model_deployer_step(model=model, timeout=timeout) ``` -**Configuration Options** -Within `VLLMDeploymentService`, you can configure: +Refer to this [example](https://github.com/zenml-io/zenml-projects/tree/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer) for running a GPT-2 model with vLLM. -- `model`: Hugging Face model name or path. -- `tokenizer`: Hugging Face tokenizer name or path (defaults to model name if unspecified). -- `served_model_name`: API model name (defaults to model name). +#### Configuration Options +Within `VLLMDeploymentService`, you can configure: +- `model`: Hugging Face model name/path. +- `tokenizer`: Hugging Face tokenizer name/path (defaults to model if unspecified). +- `served_model_name`: API model name (defaults to `model`). - `trust_remote_code`: Trust remote code from Hugging Face. -- `tokenizer_mode`: Options are ['auto', 'slow', 'mistral']. -- `dtype`: Data type for weights and activations (options: ['auto', 'half', 'float16', 'bfloat16', 'float', 'float32']). -- `revision`: Specific model version (branch name, tag, or commit id; defaults to the latest). - -For a practical example, refer to the [deployment pipeline](https://github.com/zenml-io/zenml-projects/blob/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer/pipelines/deploy_pipeline.py#L25) and a [GPT-2 model example](https://github.com/zenml-io/zenml-projects/tree/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer). +- `tokenizer_mode`: Options: ['auto', 'slow', 'mistral']. +- `dtype`: Data type for weights/activations (options: ['auto', 'half', 'float16', 'bfloat16', 'float', 'float32']). +- `revision`: Specific model version (branch/tag/commit ID; defaults to latest). ================================================== === File: docs/book/component-guide/model-deployers/huggingface.md === -### Summary: Deploying Models to Hugging Face Inference Endpoints +### Summary of Hugging Face Inference Endpoints Documentation -**Overview**: Hugging Face Inference Endpoints offer a secure and managed solution for deploying `transformers`, `sentence-transformers`, and `diffusers` models on autoscaling infrastructure without needing to manage containers or GPUs. +**Overview:** +Hugging Face Inference Endpoints enable secure and scalable deployment of `transformers`, `sentence-transformers`, and `diffusers` models on managed infrastructure, eliminating the need for container and GPU management. -**When to Use**: +**When to Use:** - Deploy models on dedicated, secure infrastructure. - Prefer a fully-managed production solution for inference. - Aim to create production-ready APIs with minimal MLOps involvement. -- Seek cost-effectiveness, paying only for raw compute resources. -- Require enterprise security with offline endpoints connected to VPCs. 
+- Seek cost-effective solutions, paying only for raw compute resources. +- Require enterprise security with offline endpoints connected to Virtual Private Clouds (VPCs). -**Installation**: -To deploy models, install the Hugging Face ZenML integration: -```bash -zenml integration install huggingface -y -``` +**Deployment Steps:** +1. **Install Hugging Face ZenML Integration:** + ```bash + zenml integration install huggingface -y + ``` -**Registering the Model Deployer**: -```bash -zenml model-deployer register --flavor=huggingface --token= --namespace= -``` -- `token`: Hugging Face authentication token. -- `namespace`: Username or organization name for inference endpoint creation. +2. **Register the Model Deployer:** + ```bash + zenml model-deployer register --flavor=huggingface --token= --namespace= + ``` + - `token`: Hugging Face authentication token. + - `namespace`: Username or organization name for inference endpoints. -**Updating the Stack**: -```bash -zenml stack update --model-deployer= -``` +3. **Update Stack:** + ```bash + zenml stack update --model-deployer= + ``` -**Using the Model Deployer**: -1. **Deploying a Model**: - Use the `huggingface_model_deployer_step` in your pipeline: - ```python - from zenml import pipeline - from zenml.integrations.huggingface.services import HuggingFaceServiceConfig - from zenml.integrations.huggingface.steps import huggingface_model_deployer_step +**Using the Model Deployer:** +- Deploy models using the pre-built `huggingface_model_deployer_step`. +- Run batch inference with `HuggingFaceDeploymentService`. - @pipeline(enable_cache=True) - def huggingface_deployment_pipeline(model_name: str = "hf", timeout: int = 1200): - service_config = HuggingFaceServiceConfig(model_name=model_name) - huggingface_model_deployer_step(service_config=service_config, timeout=timeout) - ``` +**Example Deployment Pipeline:** +```python +from zenml import pipeline +from zenml.config import DockerSettings +from zenml.integrations.huggingface.services import HuggingFaceServiceConfig +from zenml.integrations.huggingface.steps import huggingface_model_deployer_step - **Configurable Attributes**: - - `model_name`, `endpoint_name`, `repository`, `framework`, `accelerator`, `instance_size`, `instance_type`, `region`, `vendor`, `token`, `account_id`, `min_replica`, `max_replica`, `revision`, `task`, `custom_image`, `namespace`, `endpoint_type`. +@pipeline(enable_cache=True) +def huggingface_deployment_pipeline(model_name: str = "hf", timeout: int = 1200): + service_config = HuggingFaceServiceConfig(model_name=model_name) + huggingface_model_deployer_step(service_config=service_config, timeout=timeout) +``` +**Configurable Attributes in `HuggingFaceServiceConfig`:** +- `model_name`, `endpoint_name`, `repository`, `framework`, `accelerator`, `instance_size`, `instance_type`, `region`, `vendor`, `token`, `account_id`, `min_replica`, `max_replica`, `revision`, `task`, `custom_image`, `namespace`, `endpoint_type`. -2. 
**Running Inference**:
-   Example of running inference on a deployed endpoint:
-   ```python
-   from zenml import step, pipeline
-   from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer
-   from zenml.integrations.huggingface.services import HuggingFaceDeploymentService
+**Running Inference Example:**
+```python
+from zenml import step, pipeline
+from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer
+from zenml.integrations.huggingface.services import HuggingFaceDeploymentService

-   @step
-   def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> HuggingFaceDeploymentService:
-       model_deployer = HuggingFaceModelDeployer.get_active_model_deployer()
-       existing_services = model_deployer.find_model_server(pipeline_name, pipeline_step_name, model_name, running)
-       if not existing_services:
-           raise RuntimeError("No running inference endpoint found.")
-       return existing_services[0]
+@step
+def prediction_service_loader(pipeline_name: str, pipeline_step_name: str) -> HuggingFaceDeploymentService:
+    model_deployer = HuggingFaceModelDeployer.get_active_model_deployer()
+    existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name)
+    if not existing_services:
+        raise RuntimeError("No running service found.")
+    return existing_services[0]

-   @step
-   def predictor(service: HuggingFaceDeploymentService, data: str) -> str:
-       return service.predict(data)
+@step
+def predictor(service: HuggingFaceDeploymentService, data: str) -> str:
+    return service.predict(data)

-   @pipeline
-   def huggingface_deployment_inference_pipeline(pipeline_name: str):
-       inference_data = ...
-       model_service = prediction_service_loader(pipeline_name)
-       predictions = predictor(model_service, inference_data)
-   ```
+@pipeline
+def huggingface_deployment_inference_pipeline(pipeline_name: str, pipeline_step_name: str):
+    model_deployment_service = prediction_service_loader(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name)
+    inference_data = ...  # placeholder: provide the data to run predictions on
+    predictions = predictor(model_deployment_service, inference_data)
+```

-For detailed configuration attributes and further information, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-huggingface/) and Hugging Face endpoint [code](https://github.com/huggingface/huggingface_hub/blob/5e3b603ccc7cd6523d998e75f82848215abf9415/src/huggingface_hub/hf_api.py#L6957).
+For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-huggingface/) and Hugging Face endpoint [code](https://github.com/huggingface/huggingface_hub/blob/5e3b603ccc7cd6523d998e75f82848215abf9415/src/huggingface_hub/hf_api.py#L6957).

==================================================

@@ -5906,40 +5868,52 @@

=== File: docs/book/component-guide/model-deployers/databricks.md ===

### Summary: Deploying Models to Databricks Inference Endpoints

**Overview:**
-Databricks Model Serving provides a unified interface for deploying, governing, and querying AI models as REST APIs, with managed autoscaling infrastructure. It allows deployment without handling containers or GPUs. The Databricks Model Deployer can be used interchangeably with MLflow without altering pipeline code.
+Databricks Model Serving provides a unified interface for deploying, governing, and querying AI models as REST APIs. It offers managed infrastructure, allowing users to deploy models without managing containers or GPUs.

-**When to Use:**
-- Already using Databricks for data and ML workloads.
-- Need to deploy AI models without managing infrastructure. -- Require enterprise security for offline endpoints connected to VPCs. -- Aim to create production-ready APIs with minimal MLOps involvement. +**When to Use Databricks Model Deployer:** +- You are utilizing Databricks for data and ML workloads. +- You prefer not to manage containers and GPUs. +- You need dedicated, autoscaling infrastructure. +- Enterprise security is a priority, requiring secure offline endpoints. +- You aim to create production-ready APIs with minimal MLOps involvement. -**Deployment Steps:** -1. Install the Databricks ZenML integration: - ```bash - zenml integration install databricks -y - ``` -2. Register the Databricks model deployer: - ```bash - zenml model-deployer register --flavor=databricks --host= --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}} - ``` - *Note: Create a Databricks service account for permissions and generate `client_id` and `client_secret` for authentication.* +**Installation:** +To deploy models, install the Databricks ZenML integration: -3. Update your ZenML stack: - ```bash - zenml stack update --model-deployer= - ``` +```bash +zenml integration install databricks -y +``` + +**Registering the Model Deployer:** +Register the Databricks model deployer with ZenML: + +```bash +zenml model-deployer register --flavor=databricks --host= --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}} +``` + +**Service Account Recommendation:** +Create a Databricks service account with necessary permissions for job creation and execution. Generate `client_id` and `client_secret` for authentication. + +**Updating Stack:** +To use the model deployer in your stack: + +```bash +zenml stack update --model-deployer= +``` **Configuration Options:** -- `model_name`: Name for the model in the Databricks Model Registry. -- `model_version`: Version identifier for the model. -- `workload_size`: Can be `Small`, `Medium`, or `Large`. -- `scale_to_zero_enabled`: Boolean to enable/disable scale to zero. +Within `DatabricksServiceConfig`, configure: +- `model_name`: Name of the model in the Databricks Model Registry. +- `model_version`: Version of the model. +- `workload_size`: Size of the workload (`Small`, `Medium`, `Large`). +- `scale_to_zero_enabled`: Enable/disable scale to zero feature. - `env_vars`: Environment variables for the model serving container. -- `workload_type`: Options include `CPU`, `GPU_LARGE`, `GPU_MEDIUM`, `GPU_SMALL`, or `MULTIGPU_MEDIUM`. -- `endpoint_secret_name`: Secret name for endpoint security. +- `workload_type`: Type of workload (`CPU`, `GPU_LARGE`, etc.). +- `endpoint_secret_name`: Secret name for securing the endpoint. + +**Running Inference:** +Example code to run inference on a provisioned endpoint: -**Inference Example:** ```python from zenml import step, pipeline from zenml.integrations.databricks.model_deployers import DatabricksModelDeployer @@ -5964,38 +5938,34 @@ def databricks_deployment_inference_pipeline(pipeline_name: str, pipeline_step_n predictions = predictor(model_deployment_service, inference_data) ``` -For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.model_deployers) and Databricks endpoint [code](https://github.com/databricks/databricks_hub/blob/5e3b603ccc7cd6523d998e75f82848215abf9415/src/databricks_hub/hf_api.py#L6957). 
+For more details on configuration and usage, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.model_deployers). ================================================== === File: docs/book/component-guide/model-deployers/model-deployers.md === -# Model Deployers - -Model deployment involves making machine learning models available for predictions on real-world data. There are two primary types of predictions: batch predictions for large datasets and real-time predictions for individual data points. Model deployers serve models either in real-time or batch mode, typically through a managed web service API (HTTP or GRPC). - -## Use Cases -Model deployers are optional components in the ZenML stack, used to deploy models in development or production environments (e.g., Kubernetes or cloud). They enable continuous training and deployment of models. +### Model Deployers Overview -## Architecture -Model deployers integrate into the ZenML stack, facilitating deployment across various environments. +**Model Deployment** is the process of making machine learning models available for predictions on real-world data. There are two primary types of predictions: +- **Batch Prediction**: Generates predictions for large datasets at once. +- **Real-Time Prediction**: Generates predictions for individual data points. -### Flavors of Model Deployers -ZenML includes several model deployers, each suited for different environments: +**Model Deployers** are components in the ZenML stack responsible for serving models in real-time or batch modes. They facilitate online serving via managed web services with API endpoints (HTTP or GRPC) and enable batch inference for large data sets, typically storing predictions in files or databases. -| Model Deployer | Flavor | Integration | Notes | -|----------------|-----------|-------------------|-----------------------------------------| -| MLflow | `mlflow` | `mlflow` | Deploys ML Model locally | -| BentoML | `bentoml` | `bentoml` | Local or production-grade deployment | -| Seldon Core | `seldon` | `seldon Core` | Kubernetes-based production deployment | -| Hugging Face | `huggingface` | `huggingface` | Deploys on Hugging Face Inference Endpoints | -| Databricks | `databricks` | `databricks` | Deploys to Databricks Inference Endpoints | -| vLLM | `vllm` | `vllm` | Deploys LLMs locally | -| Custom | _custom_ | | User-defined implementation | +### Use Cases +Model deployers are optional in the ZenML stack and can be used for deploying models in local or production environments (Kubernetes or cloud). They are primarily utilized for real-time inference, allowing the construction of pipelines for continuous training and deployment. -### Configuration Example -Model deployers require specific configurations: +### Model Deployer Flavors +ZenML supports various model deployers: +- **MLflow**: Local deployment. +- **BentoML**: Local or production-grade deployment. +- **Seldon Core**: Kubernetes-based production deployment. +- **Hugging Face**: Deployment on Hugging Face Inference Endpoints. +- **Databricks**: Deployment to Databricks Inference Endpoints. +- **vLLM**: Local deployment of LLMs. +- **Custom Implementation**: User-defined deployment solutions. 
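+
+Which flavors are available depends on the integrations installed in your environment; they can be listed from the CLI, following the same pattern as the other stack components in this guide:
+
+```shell
+zenml model-deployer flavor list
+```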
+**Configuration Example**: ```shell # Configure MLflow model deployer zenml model-deployer register mlflow --flavor=mlflow @@ -6003,24 +5973,23 @@ zenml model-deployer register mlflow --flavor=mlflow # Configure Seldon Core model deployer zenml model-deployer register seldon --flavor=seldon \ --kubernetes_context=zenml-eks --kubernetes_namespace=zenml-workloads \ ---base_url=http://example.com +--base_url=http://example-url.com ``` ### Role in ZenML Stack -- **Seamless Deployment**: Deploys models to various environments, managing configurations like hostnames and credentials. -- **Lifecycle Management**: Manages model server lifecycles (start, stop, delete, update). +- **Seamless Deployment**: Facilitates model deployment across various environments, managing configuration attributes for interaction with serving tools. +- **Lifecycle Management**: Offers methods for managing model servers, including starting, stopping, and deleting servers, as well as updating models. -Core methods for interaction include: +**Core Methods**: - `deploy_model`: Deploys a model and returns a Service object. - `find_model_server`: Lists deployed model servers. - `stop_model_server`, `start_model_server`, `delete_model_server`: Manage server states. -The Service object contains deployment configurations and operational status. +**Service Object**: Represents a deployed model server, containing `config` (deployment attributes) and `status` (operational status). -### Interaction with Deployed Models -After deployment, interact with model deployers via CLI: - -```bash +### Interacting with Model Deployer +After deployment, model deployers can be managed via CLI: +```shell $ zenml model-deployer models list $ zenml model-deployer models describe $ zenml model-deployer models get-url @@ -6028,7 +5997,6 @@ $ zenml model-deployer models delete ``` In Python, retrieve the prediction URL: - ```python from zenml.client import Client @@ -6037,7 +6005,8 @@ deployer_step = pipeline_run.steps[""] deployed_model_url = deployer_step.run_metadata["deployed_model_url"].value ``` -ZenML integrations also provide standard pipeline steps for continuous model deployment, ensuring configurations are saved in the Artifact Store for future use. +### Continuous Deployment Workflow +ZenML integrations provide standard pipeline steps for continuous model deployment, ensuring that model configurations are saved in the Artifact Store for future use. ================================================== @@ -6046,29 +6015,31 @@ ZenML integrations also provide standard pipeline steps for continuous model dep ### Summary of Deploying Models Locally with BentoML **BentoML Overview** -BentoML is an open-source framework for serving machine learning models, allowing deployment locally, in the cloud, or on Kubernetes. The BentoML Model Deployer facilitates the deployment and management of BentoML models on a local HTTP server. +BentoML is an open-source framework for serving machine learning models, enabling deployment in local, cloud, or Kubernetes environments. The BentoML Model Deployer allows for managing BentoML models on a local HTTP server. -**Deployment Paths** -- **Local HTTP Server**: For development and production. -- **Containerized Service**: Requires Docker and can be deployed to remote environments. -- **Deprecated Tool**: `bentoctl` is deprecated and may not work with the latest versions of BentoML. +**Deployment Options** +- **Local Development**: Use the Model Deployer for easy local deployment. 
+- **Production Use**: Transition to production-ready solutions with tools like Yatai or `bentoctl`, though `bentoctl` is deprecated. **When to Use** - Standardize model deployment within an organization. -- Simplify deployment while preparing for production readiness. +- Simplify initial deployment while preparing for production readiness. -**Getting Started** -1. **Install BentoML Integration**: +**Getting Started with Deployment** +1. **Install Required Packages**: ```bash zenml integration install bentoml -y ``` + 2. **Register Model Deployer**: ```bash zenml model-deployer register bentoml_deployer --flavor=bentoml ``` -**Using BentoML** -1. **Create a BentoML Service**: Define how the model will be served. +3. **Run Local HTTP Server**: The integration provisions a local HTTP server to serve models. + +**Using the Model Deployer** +1. **Create a BentoML Service**: Define how your model will be served. ```python import bentoml from bentoml.validators import DType, Shape @@ -6088,201 +6059,160 @@ BentoML is an open-source framework for serving machine learning models, allowin return to_numpy(output_tensor) ``` -2. **Build Your Own Bento**: Customize the bento build process. +2. **Build Your Own Bento**: Use the `bento_builder_step` or create a custom builder. ```python - context = get_step_context() - labels = {"model_uri": model.uri, "bento_uri": os.path.join(context.get_output_artifact_uri(), DEFAULT_BENTO_FILENAME)} - model = load_artifact_from_response(model) - bentoml.pytorch.save_model(model_name, model, labels=labels) - bento = bentos.build(service=service, models=[model_name], version=version, labels=labels) + from zenml import step + + @step + def my_bento_builder(model) -> bento.Bento: + ... + bentoml.pytorch.save_model(model_name, model) + bento = bentos.build(service=service, models=[model_name], ...) + return bento ``` -3. **Bento Builder Step**: Use ZenML's built-in step to build the bento bundle. +3. **Bento Builder Step**: Integrate the built-in step within a ZenML pipeline. ```python - from zenml import pipeline, step + from zenml import pipeline from zenml.integrations.bentoml.steps import bento_builder_step @pipeline def bento_builder_pipeline(): - bento = bento_builder_step(model=model, model_name="pytorch_mnist", service="service.py:CLASS_NAME") + bento = bento_builder_step(model=model, ...) ``` -4. **Deploying the Bento**: - - **Local Deployment**: - ```python - from zenml.integrations.bentoml.steps import bentoml_model_deployer_step - - @pipeline - def bento_deployer_pipeline(): - deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", port=3001) - ``` - - **Containerized Deployment**: - ```python - @pipeline - def bento_deployer_pipeline(): - deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", deployment_type="container", image="my-custom-image") - ``` - -5. **Predicting with Deployed Model**: +4. **BentoML Deployer Step**: Deploy the bento bundle locally or as a container. 
```python - @step - def predictor(inference_data: Dict[str, List], service: BentoMLDeploymentService) -> None: - service.start(timeout=10) - for img, data in inference_data.items(): - prediction = service.predict("predict_ndarray", np.array(data)) + from zenml.integrations.bentoml.steps import bentoml_model_deployer_step + + @pipeline + def bento_deployer_pipeline(): + deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", port=3001) ``` -**From Local to Cloud** -`bentoctl` (deprecated) was used for deploying models to cloud services like AWS Lambda, SageMaker, Google Cloud, etc. For more details, refer to the [BentoML documentation](https://docs.bentoml.org). +**Predicting with Deployed Model** +Use the BentoML client to send requests to the deployed model: +```python +@step +def predictor(inference_data: Dict[str, List], service: BentoMLDeploymentService) -> None: + service.start(timeout=10) + for img, data in inference_data.items(): + prediction = service.predict("predict_ndarray", np.array(data)) +``` + +**From Local to Cloud with `bentoctl`** +`bentoctl` (deprecated) allows deployment to cloud services like AWS Lambda, Google Cloud Run, etc. For more details, refer to the [BentoML documentation](https://docs.bentoml.org). -This summary encapsulates the key aspects of deploying models locally with BentoML, ensuring that critical technical information is retained while maintaining conciseness. +**Conclusion** +BentoML provides a streamlined approach to model deployment, from local testing to production environments, with flexibility for various deployment scenarios. For further details, consult the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-bentoml/#zenml.integrations.bentoml.model_deployers.bentoml_model_deployer). ================================================== === File: docs/book/component-guide/model-deployers/custom.md === -### Develop a Custom Model Deployer +### Summary: Developing a Custom Model Deployer in ZenML -ZenML provides a `Model Deployer` stack component for deploying and managing machine-learning models. It interacts with deployment tools and can serve as a model registry, allowing users to list, suspend, resume, or delete deployed models. +ZenML provides a `Model Deployer` stack component for deploying and managing machine learning models. This component interacts with various deployment tools and can serve as a model registry, allowing users to list, suspend, resume, or delete deployed models. -#### Base Abstraction - -The `Model Deployer` is built on three criteria: - -1. **Efficient Deployment**: Holds configuration attributes for interacting with remote model serving tools. -2. **Continuous Deployment**: Implements logic to update existing model servers instead of creating new ones (via the `deploy_model` method). -3. **BaseService Registry**: Acts as a registry for remote model servers, allowing recreation of their configurations (e.g., as Kubernetes resource annotations). - -The model deployer also manages the lifecycle of remote model servers with methods like `stop_model_server`, `start_model_server`, and `delete_model_server`. +#### Key Criteria for Model Deployer: +1. **Efficient Deployment**: Must handle stack-related configurations for remote model serving tools. +2. **Continuous Deployment**: Implements logic to update existing model servers rather than creating new ones for each version (via `deploy_model`). +3. 
**BaseService Registry**: Acts as a registry for remote model servers, allowing re-creation of `BaseService` instances from persisted configurations. -#### Interface +#### Interface Overview: +The `BaseModelDeployer` class defines essential abstract methods for model deployment and lifecycle management: ```python from abc import ABC, abstractmethod from typing import Dict, Optional, Type from uuid import UUID -from zenml.enums import StackComponentType from zenml.services import BaseService, ServiceConfig from zenml.stack import StackComponent, StackComponentConfig, Flavor -DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT = 300 - class BaseModelDeployerConfig(StackComponentConfig): - """Base class for all ZenML model deployer configurations.""" + """Base class for model deployer configurations.""" class BaseModelDeployer(StackComponent, ABC): @abstractmethod - def perform_deploy_model(self, id: UUID, config: ServiceConfig, timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT) -> BaseService: + def perform_deploy_model(self, id: UUID, config: ServiceConfig) -> BaseService: """Deploy a model.""" - @staticmethod - @abstractmethod - def get_model_server_info(service: BaseService) -> Dict[str, Optional[str]]: - """Extract model server properties.""" - @abstractmethod - def perform_stop_model(self, service: BaseService, timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT, force: bool = False) -> BaseService: + def perform_stop_model(self, service: BaseService) -> BaseService: """Stop a model server.""" @abstractmethod - def perform_start_model(self, service: BaseService, timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT) -> BaseService: + def perform_start_model(self, service: BaseService) -> BaseService: """Start a model server.""" @abstractmethod - def perform_delete_model(self, service: BaseService, timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT, force: bool = False) -> None: + def perform_delete_model(self, service: BaseService) -> None: """Delete a model server.""" - -class BaseModelDeployerFlavor(Flavor): - @property - @abstractmethod - def name(self): - """Returns the flavor name.""" - - @property - def type(self) -> StackComponentType: - return StackComponentType.MODEL_DEPLOYER - - @property - def config_class(self) -> Type[BaseModelDeployerConfig]: - return BaseModelDeployerConfig - - @property - @abstractmethod - def implementation_class(self) -> Type[BaseModelDeployer]: - """Returns the implementing class.""" ``` -#### Building Custom Model Deployers - -To create a custom model deployer flavor: - +#### Building Custom Model Deployers: +To create a custom model deployer: 1. Inherit from `BaseModelDeployer` and implement the abstract methods. 2. Create a configuration class inheriting from `BaseModelDeployerConfig`. -3. Combine both by inheriting from `BaseModelDeployerFlavor` and provide a `name`. -4. Create a service class inheriting from `BaseService`. - -Register the flavor using: +3. Combine both in a class inheriting from `BaseModelDeployerFlavor`, providing a `name`. +4. Implement a service class inheriting from `BaseService`. +Register the custom flavor using: ```shell zenml model-deployer flavor register ``` Example registration: - ```shell zenml model-deployer flavor register flavors.my_flavor.MyModelDeployerFlavor ``` -#### Important Notes - -- The `CustomModelDeployerFlavor` is utilized during flavor creation. -- The `CustomModelDeployerConfig` is used for validating user input during registration. 
-- The `CustomModelDeployer` is invoked when the component is in use, allowing separation of configuration and implementation. +#### Important Notes: +- Ensure ZenML is initialized at the root of your repository for proper flavor resolution. +- The `CustomModelDeployerFlavor` is used during flavor creation, while `CustomModelDeployerConfig` validates user inputs during registration. +- The actual `CustomModelDeployer` is utilized when the component is in action, allowing separation of configuration and implementation. -For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-model_deployers/#zenml.model_deployers.base_model_deployer.BaseModelDeployer). +This structure allows for flexible and efficient model deployment management within ZenML workflows. ================================================== === File: docs/book/component-guide/model-deployers/seldon.md === -### Summary: Deploying Models to Kubernetes with Seldon Core +### Summary of Deploying Models to Kubernetes with Seldon Core **Overview:** -Seldon Core is a production-grade model serving platform for deploying machine learning models as REST/GRPC microservices. It offers features like monitoring, logging, model explainers, outlier detection, and advanced deployment strategies (A/B testing, canary deployments). It simplifies real-time inference with built-in model server implementations compatible with standard ML model packaging formats. +Seldon Core is a production-grade model serving platform that enables deployment of machine learning models as REST/GRPC microservices. Key features include monitoring, logging, model explainers, outlier detection, and advanced deployment strategies like A/B testing and canary deployments. It simplifies serving models for real-time inference with built-in support for standard ML model packaging formats. -**Key Points:** -- **Compatibility:** Seldon Core does not support deployment on MacOS. -- **Use Cases:** Ideal for advanced Kubernetes deployments, lifecycle management without downtime, enhanced API interactions, and complex deployment processes. -- **Prerequisites for Deployment:** - 1. Access to a Kubernetes cluster (recommended to use a Service Connector). - 2. Seldon Core must be pre-installed in the cluster. - 3. Models should be stored in persistent shared storage (e.g., AWS S3, GCS). - -**Installation Steps:** -1. **Install Seldon Core:** - ```bash - zenml integration install seldon -y - ``` +**Important Notes:** +- **MacOS Support:** The Seldon Core model deployer is not supported on MacOS. + +**When to Use Seldon Core:** +- For deploying models on Kubernetes. +- To manage model lifecycle with zero downtime. +- For advanced API endpoints and deployment strategies. +- When needing a customizable deployment process with advanced inference graphs. -2. **Set Up Kubernetes Access:** - Configure access to the EKS cluster: +**Deployment Prerequisites:** +1. Access to a Kubernetes cluster (recommended to use a Service Connector). +2. Seldon Core must be preinstalled in the target Kubernetes cluster. +3. Models must be stored in persistent shared storage accessible from the Kubernetes cluster. + +**Installation Steps for Seldon Core on EKS:** +1. Configure EKS cluster access: ```bash aws eks --region us-east-1 update-kubeconfig --name zenml-cluster --alias zenml-eks ``` - -3. **Install Istio:** +2. 
Install Istio:
   ```bash
   curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh -
   cd istio-1.5.0/
   bin/istioctl manifest apply --set profile=demo
   ```
-
-4. **Configure Istio Gateway:**
+3. Set up the Istio gateway:
   ```bash
   curl https://raw.githubusercontent.com/SeldonIO/seldon-core/master/notebooks/resources/seldon-gateway.yaml | kubectl apply -f -
   ```
-
-5. **Install Seldon Core:**
+4. Install Seldon Core:
   ```bash
   helm install seldon-core seldon-core-operator \
       --repo https://storage.googleapis.com/seldon-charts \
@@ -6290,46 +6220,19 @@ Seldon Core is a production-grade model serving platform for deploying machine l
       --set istio.enabled=true \
       --namespace seldon-system
   ```
-
-6. **Deploy a Model:**
+5. Test the installation:
   ```bash
   kubectl apply -f iris.yaml
   ```
-   Example `iris.yaml`:
-   ```yaml
-   apiVersion: machinelearning.seldon.io/v1
-   kind: SeldonDeployment
-   metadata:
-     name: iris-model
-     namespace: default
-   spec:
-     name: iris
-     predictors:
-       - graph:
-           implementation: SKLEARN_SERVER
-           modelUri: gs://seldon-models/v1.14.0-dev/sklearn/iris
-           name: classifier
-         name: default
-         replicas: 1
-   ```
-
-7. **Get Ingress Host:**
-   ```bash
-   export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
-   ```
-
-8. **Test Prediction API:**
-   ```bash
-   curl -X POST http://$INGRESS_HOST/seldon/default/iris-model/api/v1.0/predictions \
-   -H 'Content-Type: application/json' \
-   -d '{ "data": { "ndarray": [[1,2,3,4]] } }'
-   ```
-
-**Service Connector:**
-Use Service Connectors for authentication with remote Kubernetes clusters. Options include AWS, GCP, Azure, or generic Kubernetes connectors.
+**Service Connector Setup:**
+To authenticate to a remote Kubernetes cluster, use Service Connectors for secure access management. Depending on your cloud provider, register the appropriate Service Connector:
+```bash
+zenml service-connector register --type <CONNECTOR_TYPE> --resource-type kubernetes-cluster --resource-name <CLUSTER_NAME> --auto-configure
+```

**Model Deployer Registration:**
+Register the Seldon Core Model Deployer:
```bash
zenml model-deployer register --flavor=seldon \
  --kubernetes_namespace=<KUBERNETES_NAMESPACE> \
@@ -6337,51 +6240,71 @@ zenml model-deployer register --flavor=seldon \
```

**Configuration Options:**
+Within `SeldonDeploymentConfig`, configure:
- `model_name`: Name of the model.
- `replicas`: Number of replicas.
- `implementation`: Type of Seldon server (e.g., `SKLEARN_SERVER`).
- `parameters`: Optional parameters for deployment.
-- `resources`: Resource allocation for the model.
+- `resources`: Resource allocation (CPU and memory).
+- `serviceAccount`: Name of the Service Account for the deployment.

**Custom Code Deployment:**
-Define a custom prediction function to deploy pre- and post-processing code with the model. Example:
+Define a custom prediction function and use `seldon_custom_model_deployer_step` to deploy it:
```python
def custom_predict(model, request):
    # Custom prediction logic
    return predictions
+
+seldon_custom_model_deployer_step(
+    model=model,
+    predict_function="<PATH.TO.custom_predict>",
+    service_config=SeldonDeploymentConfig(
+        model_name="<MODEL_NAME>",
+        replicas=1,
+        implementation="custom",
+        resources=SeldonResourceRequirements(
+            limits={"cpu": "200m", "memory": "250Mi"}
+        ),
+    ),
+)
```
-**Advanced Deployment:**
-For deploying custom models, refer to the Seldon Core documentation on custom Python models.
+**Advanced Custom Code Deployment:**
+For more complex deployments, create a custom class and step.
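As a rough illustration of such a class (a sketch only — the class and attribute names here are hypothetical, not part of the ZenML or Seldon APIs), a custom Seldon Python model is a plain class exposing a `predict` method that Seldon Core wraps as a microservice:

```python
import numpy as np


class MyCustomModel:
    """Hypothetical custom model: Seldon Core can wrap any class exposing predict()."""

    def __init__(self):
        # Load model weights or other artifacts once, at server startup.
        self.scale = 2.0

    def predict(self, X: np.ndarray, features_names=None) -> np.ndarray:
        # Pre-process, run inference, and post-process in one place.
        return X * self.scale
```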
Refer to the Seldon Core documentation for details on custom Python models.

-This summary encapsulates the essential steps and configurations for deploying models using Seldon Core on Kubernetes, ensuring that critical information is preserved while maintaining conciseness.
+This summary provides a concise overview of deploying models using Seldon Core, including installation, configuration, and deployment strategies, ensuring that critical information is preserved.

==================================================

=== File: docs/book/component-guide/model-deployers/mlflow.md ===

-### MLflow Model Deployer Documentation Summary
+### Summary: Deploying Models Locally with MLflow

-**Overview**: The MLflow Model Deployer allows for local deployment and management of MLflow models on a local MLflow server. Currently, it is intended for development environments and is not production-ready.
+**MLflow Model Deployer Overview**
+- The MLflow Model Deployer is part of the ZenML MLflow integration for deploying and managing MLflow models on a local MLflow server.
+- Currently, it is intended for local development and is not production-ready.

-**Use Cases**:
-- Deploy models locally for real-time predictions.
-- Simplified deployment without complex infrastructure like Kubernetes.
+**Use Cases**
+- Ideal for local model deployment and real-time predictions without complex infrastructure (e.g., Kubernetes).
+- Not suitable for complex deployment scenarios; consider other Model Deployer flavors for such cases.

-**Installation**:
-To use the MLflow Model Deployer, install the MLflow integration with:
-```bash
-zenml integration install mlflow -y
-```
-Register the model deployer:
-```bash
-zenml model-deployer register mlflow_deployer --flavor=mlflow
-```
-This sets up a local MLflow server as a daemon.
+**Deployment Steps**
+1. **Install the MLflow integration:**
+   ```bash
+   zenml integration install mlflow -y
+   ```
+2. **Register the MLflow Model Deployer:**
+   ```bash
+   zenml model-deployer register mlflow_deployer --flavor=mlflow
+   ```

-**Deployment Steps**:
-1. **Deploy a Logged Model**:
-   If the model URI is known:
+**Deploying a Logged Model**
+- Ensure the model is logged in the MLflow experiment tracker.
+- Use the model URI from the artifact path or model registry.
+
+**Example Code for Deployment:**
+1. **Known Model URI:**
   ```python
   from zenml import step, get_step_context
   from zenml.client import Client
@@ -6392,10 +6315,10 @@ This sets up a local MLflow server as a daemon.
    model_deployer = zenml_client.active_stack.model_deployer
    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
-       description="An example of deploying a model",
+       description="Deploying a model using MLflow",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
-       model_uri="runs:/<RUN_ID>/model" or "models:/<MODEL_NAME>/<MODEL_STAGE>",
+       model_uri="runs:/<RUN_ID>/model",
        model_name="model",
        workers=1,
        mlserver=False,
@@ -6405,8 +6328,7 @@ This sets up a local MLflow server as a daemon.
    return service
  ```

-2. **Deploy Without Known URI**:
-   If the model URI is not known:
+2. **Unknown Model URI:**
  ```python
  from zenml import step, get_step_context
  from zenml.client import Client
@@ -6423,15 +6345,14 @@ This sets up a local MLflow server as a daemon.
  )
      experiment_tracker.configure_mlflow()
      client = MlflowClient()
-     model_name = "model"
-     model_uri = artifact_utils.get_artifact_uri(run_id=mlflow_run_id, artifact_path=model_name)
+     model_uri = artifact_utils.get_artifact_uri(run_id=mlflow_run_id, artifact_path="model")
      mlflow_deployment_config = MLFlowDeploymentConfig(
         name="mlflow-model-deployment-example",
-        description="An example of deploying a model",
+        description="Deploying a model using MLflow",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        model_uri=model_uri,
-        model_name=model_name,
+        model_name="model",
        workers=1,
        mlserver=False,
        timeout=300,
@@ -6440,17 +6361,11 @@ This sets up a local MLflow server as a daemon.
    return service
  ```

-**Configuration Options**:
-- `name`, `description`, `pipeline_name`, `pipeline_step_name`: Metadata for the deployment.
-- `model_name`, `model_version`: Specify the model to deploy.
-- `silent_daemon`, `blocking`: Control daemon behavior.
-- `model_uri`: Path or identifier for the model.
-- `workers`: Number of prediction server workers.
-- `mlserver`: Boolean to start as MLServer instance.
-- `timeout`: Timeout for server start/stop.
+**Configuration Options for `MLFlowDeploymentService`:**
+- `name`, `description`, `pipeline_name`, `pipeline_step_name`, `model_name`, `model_uri`, `workers`, `mlserver`, `timeout`.

-**Inference**:
-1. **Load a Prediction Service**:
+**Running Inference on a Deployed Model**
+1. **Load the prediction service:**
  ```python
  import json
  import requests
@@ -6469,7 +6384,7 @@ This sets up a local MLflow server as a daemon.
    return response.json()
  ```

-2. **Run Inference in the Same Pipeline**:
+2. **Use the service for inference:**
  ```python
  from typing_extensions import Annotated
  import numpy as np
@@ -6482,7 +6397,7 @@ This sets up a local MLflow server as a daemon.
    return prediction.argmax(axis=-1)
  ```

-For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_deployers).
+For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_deployers).

==================================================

=== File: docs/book/component-guide/container-registries/azure.md ===

### Azure Container Registry Overview

-The Azure Container Registry (ACR) is a built-in container registry option in ZenML that utilizes Azure's services for storing container images.
+The Azure Container Registry (ACR) is a built-in container registry for ZenML, allowing storage of container images on Azure.

-#### When to Use ACR
-Use ACR if:
+#### When to Use
+Utilize ACR if:
- Your stack components require pulling or pushing container images.
- You have access to Azure.

#### Deployment Steps
1. Go to the [Azure portal](https://portal.azure.com/#create/Microsoft.ContainerRegistry).
-2. Choose a subscription, resource group, location, and registry name.
-3. Click `Review + Create` to create the registry.
+2. Select a subscription, resource group, location, and registry name.
+3. Click `Review + Create`.

-#### Registry URI Format
-The ACR URI format is:
+#### Finding the Registry URI
+The URI format is:
```shell
<REGISTRY_NAME>.azurecr.io
```
-Example URIs:
-- zenmlregistry.azurecr.io
-- myregistry.azurecr.io
-
To find your registry URI:
- Search for `container registries` in the Azure portal.
-- Use the registry name to format the URI.
+- Use the registry name to construct the URI.

-#### Using ACR
+#### Usage
Prerequisites:
- Docker installed and running.
-- Obtain the registry URI.
+- Registry URI obtained from the previous section.

Register the container registry:
@@ -6528,49 +6439,51 @@
```shell
zenml container-registry register <CONTAINER_REGISTRY_NAME> --flavor=azure --uri=<REGISTRY_URI>
zenml stack update -c <CONTAINER_REGISTRY_NAME>
```

#### Authentication Methods
Authentication is required to use ACR:
-1. **Local Authentication** (quick setup for local use):
-   - Install the Azure CLI.
-   - Log in to the registry:
-     ```shell
-     az acr login --name=<REGISTRY_NAME>
-     ```
-   **Note:** Local authentication is not portable across environments.
-2. **Azure Service Connector** (recommended for production):
-   - Register a service connector:
-     ```sh
-     zenml service-connector register --type azure -i
-     ```
-   - Non-interactive registration using a Service Principal:
-     ```sh
-     zenml service-connector register --type azure --auth-method service-principal --tenant_id=<AZURE_TENANT_ID> --client_id=<AZURE_CLIENT_ID> --client_secret=<AZURE_CLIENT_SECRET> --resource-type docker-registry --resource-id <REGISTRY_URI>
-     ```
+**Local Authentication** (quick setup):
+- Uses local Docker client authentication.
+- Requires the Azure CLI.
+- Log in to the registry:
+```shell
+az acr login --name=<REGISTRY_NAME>
+```
+*Note: Local authentication is not portable across environments.*
+
+**Azure Service Connector** (recommended):
+- Provides auto-configuration and security.
+- Register a service connector:
+```sh
+zenml service-connector register --type azure -i
+```
+- Non-interactive registration using a Service Principal:
+```sh
+zenml service-connector register --type azure --auth-method service-principal --tenant_id=<AZURE_TENANT_ID> --client_id=<AZURE_CLIENT_ID> --client_secret=<AZURE_CLIENT_SECRET> --resource-type docker-registry --resource-id <REGISTRY_URI>
+```

#### Connecting ACR to the Service Connector
-After setting up a service connector, register and connect the ACR:
+After setting up the service connector, register and connect the ACR:
```sh
zenml container-registry register <CONTAINER_REGISTRY_NAME> -f azure --uri=<REGISTRY_URI>
zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
```
-For non-interactive connection:
+*Non-interactive connection:*
```sh
zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
```

-#### Using ACR in a ZenML Stack
-To register and set a stack with the new container registry:
+#### Final Steps
+To use the Azure Container Registry in a ZenML Stack:
```sh
zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set
```

-#### Local Docker Client Authentication
-To temporarily authenticate your local Docker client:
+For local Docker CLI access to the remote registry:
```sh
zenml service-connector login <CONNECTOR_NAME> --resource-type docker-registry --resource-id <REGISTRY_URI>
```

-#### Additional Resources
-For more details on configurable attributes of the Azure container registry, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.azure_container_registry.AzureContainerRegistry).
+### Additional Resources
+For more details on configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.azure_container_registry.AzureContainerRegistry).

==================================================

=== File: docs/book/component-guide/container-registries/github.md ===

### GitHub Container Registry Overview

-The GitHub Container Registry, integrated with ZenML, is designed for storing container images.
+The GitHub Container Registry, integrated with ZenML, is used for storing container images.

#### When to Use
-Utilize the GitHub container registry if:
-- Your stack components require pulling or pushing container images.
-- You are using GitHub for your projects. If not, consider other container registry options.
+- Required when components of your stack need to pull or push container images.
+- Ideal for projects hosted on GitHub.

#### Deployment
-The GitHub container registry is enabled by default upon creating a GitHub account.
+- Automatically enabled upon creating a GitHub account.

-#### Finding the Registry URI
-The URI format is:
+#### Registry URI Format
+The URI follows this format:
```shell
ghcr.io/<USER_OR_ORGANIZATION_NAME>
```

**Examples**:
- `ghcr.io/my-username`
- `ghcr.io/my-organization`

+To find your registry URI, replace `<USER_OR_ORGANIZATION_NAME>` with your GitHub username or organization name.
+
#### Usage Requirements
-To use the GitHub container registry, ensure you have:
-- Docker installed and running.
-- The correct registry URI.
-- A configured Docker client for pulling and pushing images. Follow [this guide](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry) to create a personal access token and log in.
+- **Docker**: Must be installed and running.
+- **Registry URI**: Refer to the URI format above.
+- **Docker Client Configuration**: Follow [this guide](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry) to create a personal access token and log in.

#### Registering the Container Registry
To register and update your active stack, use:
@@ -6614,87 +6527,90 @@
```shell
zenml container-registry register <CONTAINER_REGISTRY_NAME> \
    --flavor=github \
    --uri=<REGISTRY_URI>

zenml stack update -c <CONTAINER_REGISTRY_NAME>
```

-For detailed attributes and configurations of the GitHub container registry, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.github_container_registry.GitHubContainerRegistry).
+For additional details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.github_container_registry.GitHubContainerRegistry).

==================================================

=== File: docs/book/component-guide/container-registries/gcp.md ===

-### Google Cloud Container Registry Summary
+### Google Cloud Container Registry Overview

-**Overview**: The Google Cloud Container Registry is integrated with ZenML and utilizes the Google Artifact Registry. Note that Google Container Registry will be deprecated in favor of Artifact Registry, which will fully replace it by March 18, 2025.
+The Google Cloud Container Registry, integrated with ZenML, utilizes the Google Artifact Registry. **Important:** Google Container Registry is being phased out in favor of Artifact Registry. After May 15, 2024, Artifact Registry will host images for the gcr.io domain, and Container Registry will be shut down by March 18, 2025.

-**When to Use**: Use the GCP container registry if:
+### When to Use
+Use the GCP container registry if:
- Your stack components require pulling or pushing container images.
- You have access to GCP.

-**Deployment Steps**:
-1. Enable the Google Artifact Registry [here](https://console.cloud.google.com/marketplace/product/google/artifactregistry.googleapis.com).
-2. Create a `Docker` repository [here](https://console.cloud.google.com/artifacts).
+### Deployment Steps
+1. Enable Google Artifact Registry [here](https://console.cloud.google.com/marketplace/product/google/artifactregistry.googleapis.com).
+2. Create a Docker repository [here](https://console.cloud.google.com/artifacts).

-**Registry URI Format**:
+### Registry URI Format
+The GCP container registry URI format is:
```shell
<REGION>-docker.pkg.dev/<PROJECT_ID>/<REPOSITORY_NAME>
```

-**Example URIs**:
+**Examples:**
```
europe-west1-docker.pkg.dev/zenml/my-repo
southamerica-east1-docker.pkg.dev/zenml/zenml-test
```

-**Usage Requirements**:
-- Install and run Docker.
-- Obtain the registry URI as described above.
-
-**Registering the Container Registry**:
+### Usage
+To use the GCP container registry:
+1. Ensure Docker is installed and running.
+2. Register the container registry:
```shell
zenml container-registry register <CONTAINER_REGISTRY_NAME> --flavor=gcp --uri=<REGISTRY_URI>
zenml stack update -c <CONTAINER_REGISTRY_NAME>
```
+3. Set up authentication.

-**Authentication**:
-Authentication is necessary for using the GCP Container Registry. Two methods are available:
-
-1. **Local Authentication** (Quick Start):
-   - Requires the GCP CLI installed.
-   - Configure Docker for Google Container Registry:
-     ```shell
-     gcloud auth configure-docker
-     ```
-   - For Google Artifact Registry:
-     ```shell
-     gcloud auth configure-docker <REGION>-docker.pkg.dev
-     ```
-
-   **Note**: Local authentication is not portable across environments.
+### Authentication Methods
+Authentication is required to use the GCP Container Registry. Two methods are available:

+#### Local Authentication
+- Quick setup using local Docker client credentials.
+- Requires the GCP CLI.
+- Configure Docker for Google Container Registry:
+```shell
+gcloud auth configure-docker
+```
+- For Google Artifact Registry:
+```shell
+gcloud auth configure-docker <REGION>-docker.pkg.dev
+```
+**Note:** Local authentication is not portable across environments.

-2. **GCP Service Connector** (Recommended):
-   - Provides auto-configuration and better security.
-   - Register a service connector:
-     ```sh
-     zenml service-connector register --type gcp -i
-     ```
-   - For auto-configuration:
-     ```sh
-     zenml service-connector register --type gcp --resource-type docker-registry --auto-configure
-     ```
+#### GCP Service Connector (Recommended)
+- Provides auto-configuration and security best practices.
+- Register a GCP Service Connector:
+```sh
+zenml service-connector register --type gcp -i
+```
+- Non-interactive registration:
+```sh
+zenml service-connector register --type gcp --resource-type docker-registry --auto-configure
+```

-**Connecting the GCP Container Registry**:
+### Connecting the GCP Container Registry
+To connect the GCP Container Registry to a GCR registry:
```sh
-zenml container-registry register <CONTAINER_REGISTRY_NAME> -f gcp --uri=<REGISTRY_URI>
zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
```
-For a non-interactive connection:
+For non-interactive connection:
```sh
zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
```

-**Using the GCP Container Registry in ZenML**:
+### Final Steps
+To use the GCP Container Registry in a ZenML Stack:
```sh
zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set
```

-For more details on configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.gcp_container_registry.GCPContainerRegistry).
+For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.gcp_container_registry.GCPContainerRegistry).

==================================================

=== File: docs/book/component-guide/container-registries/dockerhub.md ===

**Overview**: DockerHub is a built-in container registry in ZenML for storing container images.
-#### When to Use
-- Use DockerHub if:
-  - Your stack components need to pull or push images.
-  - You have a DockerHub account.
+**When to Use**:
+- If components of your stack need to pull or push images.
+- If you have a DockerHub account.

-#### Deployment Steps
-1. **Create a DockerHub Account**: Required to use the registry.
-2. **Repository Type**:
-   - By default, images are published to a **public** repository.
-   - For a **private** repository, create one on DockerHub before running the pipeline.
+**Deployment**:
+1. Create a DockerHub account.
+2. Images are published to a **public** repository by default. For a **private** repository, create one on DockerHub before running the pipeline.
+3. The repository name depends on the orchestrator or step operator used in your stack.

-#### Registry URI Format
-The DockerHub registry URI can be one of the following:
+**Registry URI Format**:
+The DockerHub registry URI can be:
```shell
<ACCOUNT_NAME>
# or
docker.io/<ACCOUNT_NAME>
```

**Examples**:
-- `zenml`
-- `my-username`
-- `docker.io/zenml`
-- `docker.io/my-username`
+- zenml
+- my-username
+- docker.io/zenml
+- docker.io/my-username

-**Finding Your URI**:
+**Finding the URI**:
- Use your DockerHub account name to construct the URI.

-#### Usage
+**Usage**:
1. Ensure Docker is installed and running.
2. Register the container registry:
@@ -6746,10 +6660,9 @@ zenml stack update -c 
   ```shell
   zenml container-registry register <CONTAINER_REGISTRY_NAME> --flavor=dockerhub --uri=<REGISTRY_URI>
   zenml stack update -c <CONTAINER_REGISTRY_NAME>
   ```
3. Log in with the Docker CLI:
   ```shell
   docker login
   ```
-*Use your account name and password or a personal access token.*
+Use your DockerHub account name and password or a personal access token.

-#### Additional Information
-For more details and configurable attributes of the DockerHub container registry, refer to the [SDK Docs](https://apidocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.dockerhub_container_registry.DockerHubContainerRegistry).
+**Additional Information**: For configurable attributes of the DockerHub container registry, refer to the [SDK Docs](https://apidocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.dockerhub_container_registry.DockerHubContainerRegistry).

==================================================

=== File: docs/book/component-guide/container-registries/container-registries.md ===

### Container Registries

-Container registries are crucial for storing Docker images used in remote MLOps stacks, enabling the containerization of machine learning pipeline code for isolated execution.
+Container registries are crucial for storing Docker images used in machine learning pipelines within remote MLOps stacks. They enable the containerization of pipeline code, ensuring a portable and isolated execution environment.

#### When to Use

-A container registry is necessary when components of your stack need to push or pull container images, applicable to most of ZenML's remote orchestrators, step operators, and some model deployers. Check the documentation of specific components to determine if a container registry is required.
+A container registry is necessary when components of your stack need to push or pull container images. This applies to most of ZenML's remote orchestrators, step operators, and some model deployers. Check the documentation for specific components to determine if a container registry is required.

#### Container Registry Flavors

ZenML supports several container registry flavors:

- **Default Flavor**: Accepts any URI without validation, suitable for local or unsupported remote registries.
- **Specific Flavors**: Validates the URI and performs checks to ensure push capabilities.

-**Recommendation**: Use specific container registry flavors for enhanced URI validation.
+**Recommendation**: Use specific container registry flavors for additional URI validation.

| Container Registry | Flavor | Integration | URI Example |
|--------------------|---------|-------------|-------------------------------------------|
| [DefaultContainerRegistry](default.md) | `default` | _built-in_ | - |
| [DockerHubContainerRegistry](dockerhub.md) | `dockerhub` | _built-in_ | docker.io/zenml |
| [GCPContainerRegistry](gcp.md) | `gcp` | _built-in_ | gcr.io/zenml |
| [AzureContainerRegistry](azure.md) | `azure` | _built-in_ | zenml.azurecr.io |
| [GitHubContainerRegistry](github.md) | `github` | _built-in_ | ghcr.io/zenml |
| [AWSContainerRegistry](aws.md) | `aws` | `aws` | 123456789.dkr.ecr.us-east-1.amazonaws.com |

To view available container registry flavors, use the command:

```shell
zenml container-registry flavor list
```

==================================================

=== File: docs/book/component-guide/container-registries/aws.md ===

-### Amazon Elastic Container Registry (ECR) with ZenML
+### Amazon Elastic Container Registry (ECR) Overview

-**Overview**: Amazon ECR is used to store container images in ZenML's AWS integration.
+Amazon ECR is a container registry integrated with ZenML's AWS support, allowing storage of container images.

-**When to Use**:
-- If components of your stack need to pull/push container images.
+#### When to Use
+- If components of your stack require pulling or pushing container images.
- If you have access to AWS ECR.

-**Deployment Steps**:
-1. **Create a Repository**:
-   - Go to the [ECR console](https://console.aws.amazon.com/ecr).
-   - Select the correct region.
-   - Click on `Create repository` and create a private repository.
+#### Deployment Steps
+1. **Create an AWS Account**: ECR is activated automatically.
+2. **Create a Repository**:
+   - Visit the [ECR website](https://console.aws.amazon.com/ecr).
+   - Select the correct region.
+   - Click on `Create repository` and choose a private repository.

-2. **URI Format**:
-   - The URI format is `<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com`.
-   - Example: `123456789.dkr.ecr.eu-west-2.amazonaws.com`.
-
-3. **Get Registry URI**:
-   - Find your `Account ID` in the AWS console.
-   - Choose your region from [AWS regional endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints).
-   - Construct your URI.
+#### URI Format
+The URI format for AWS ECR is:
+```
+<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com
+```
+Example URI:
+```
+123456789.dkr.ecr.eu-west-2.amazonaws.com
+```
+To find your URI:
+- Get your `Account ID` from the AWS console.
+- Select a region from [AWS regional endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints).
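If you prefer the command line, the same URI can be assembled with standard AWS CLI calls (a sketch; the region value is an assumption you should replace):

```shell
# Look up the account ID and build the ECR registry URI from it.
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
REGION=eu-west-2  # replace with your region
echo "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
```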
-**Usage**:
-- Install the ZenML AWS integration:
+#### Usage Requirements
+- Install the ZenML AWS integration:
  ```shell
  zenml integration install aws
  ```
- Ensure Docker is installed and running.
-- Register the container registry:
-  ```shell
-  zenml container-registry register <CONTAINER_REGISTRY_NAME> --flavor=aws --uri=<REGISTRY_URI>
-  ```
-- Update the active stack:
-  ```shell
-  zenml stack update -c <CONTAINER_REGISTRY_NAME>
-  ```
+- Obtain the registry URI.

+#### Registering the Container Registry
+To register the registry and update the active stack:
+```shell
+zenml container-registry register <CONTAINER_REGISTRY_NAME> --flavor=aws --uri=<REGISTRY_URI>
+zenml stack update -c <CONTAINER_REGISTRY_NAME>
+```

+#### Authentication Methods
+Authentication is necessary to use AWS ECR:

-**Authentication Methods**:
-- **Local Authentication**: Quick setup using local AWS CLI credentials. Requires AWS CLI installation.
-  ```shell
-  aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <REGISTRY_URI>
-  ```
-  *Note*: Not portable across environments.
+1. **Local Authentication** (quick setup):
+   - Requires the AWS CLI installed and configured.
+   - Log in to the container registry:
+     ```shell
+     aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <REGISTRY_URI>
+     ```
+   - Note: This method is not portable across environments.

-- **AWS Service Connector (Recommended)**: Provides auto-configuration and better security practices.
-  - Register a service connector:
-    ```sh
-    zenml service-connector register --type aws -i
-    ```
-  - Non-interactive registration:
-    ```sh
-    zenml service-connector register --type aws --resource-type docker-registry --auto-configure
-    ```
+2. **AWS Service Connector** (recommended):
+   - Provides auto-configuration and better security.
+   - Register a service connector:
+     ```sh
+     zenml service-connector register --type aws -i
+     ```
+   - Non-interactive registration:
+     ```sh
+     zenml service-connector register --type aws --resource-type docker-registry --auto-configure
+     ```

-**Connecting AWS Container Registry**:
-- Register and connect to ECR:
-  ```sh
-  zenml container-registry register <CONTAINER_REGISTRY_NAME> -f aws --uri=<REGISTRY_URI>
-  zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
-  ```
-- Non-interactive connection:
-  ```sh
-  zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
-  ```
+#### Connecting the Container Registry
+To connect the AWS container registry to the ECR:
+```sh
+zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
+# or non-interactive
+zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
+```

-**Using in a ZenML Stack**:
-- Register and set a stack:
-  ```sh
-  zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set
-  ```
+#### Using the Container Registry in a ZenML Stack
+To register and set a stack:
+```sh
+zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set
+```

-**Local Docker Client Authentication**:
-- Temporarily authenticate your local Docker client:
-  ```sh
-  zenml service-connector login <CONNECTOR_NAME> --resource-type docker-registry
-  ```
+#### Local Login for the Docker CLI
+To temporarily authenticate your local Docker client:
+```sh
+zenml service-connector login <CONNECTOR_NAME> --resource-type docker-registry
+```

-**Further Information**: For detailed attributes and configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.aws_container_registry.AWSContainerRegistry).
+#### Additional Resources
+For more details on configurable attributes of the AWS container registry, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.aws_container_registry.AWSContainerRegistry).
================================================== === File: docs/book/component-guide/container-registries/custom.md === -### Developing a Custom Container Registry in ZenML +### Develop a Custom Container Registry #### Overview -To develop a custom container registry in ZenML, it's essential to understand the framework's component flavor concepts. Refer to the [general guide](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge. +To create a custom container registry in ZenML, first familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction -ZenML's container registries have a basic abstraction with the following key components: +ZenML's container registries have a basic abstraction with a configuration containing a `uri` and a non-abstract `prepare_image_push` method for validation. -- **Base Configuration**: Contains a `uri`. -- **Base Class**: Implements a non-abstract `prepare_image_push` method for validation. - -**Key Classes:** ```python from abc import abstractmethod from typing import Type @@ -6927,45 +6844,45 @@ class BaseContainerRegistryFlavor(Flavor): return BaseContainerRegistry ``` -#### Steps to Create a Custom Container Registry -1. **Create a Custom Class**: Inherit from `BaseContainerRegistry` and implement `prepare_image_push` for any pre-push validations. -2. **Define Configuration**: Create a class inheriting from `BaseContainerRegistryConfig` for additional configurations. -3. **Combine Implementation and Configuration**: Inherit from `BaseContainerRegistryFlavor`. +#### Steps to Build a Custom Container Registry +1. **Create a Class**: Inherit from `BaseContainerRegistry` and implement `prepare_image_push` for any pre-push checks. +2. **Configuration Class**: Inherit from `BaseContainerRegistryConfig` for additional configuration. +3. **Flavor Class**: Inherit from `BaseContainerRegistryFlavor` to combine implementation and configuration. -**Registering the Flavor**: +**Register the Flavor**: ```shell zenml container-registry flavor register ``` -Example: + +For example: ```shell zenml container-registry flavor register flavors.my_flavor.MyContainerRegistryFlavor ``` #### Important Notes -- Ensure ZenML is initialized at the root of your repository for proper flavor resolution. -- After registration, list available flavors: +- Initialize ZenML at the root of your repository to ensure proper resolution of the flavor class. +- List available flavors with: ```shell zenml container-registry flavor list ``` #### Workflow Integration -- **CustomContainerRegistryFlavor** is used during flavor creation via CLI. -- **CustomContainerRegistryConfig** is utilized during stack component registration for user input validation. -- **CustomContainerRegistry** is invoked when the component is in use, allowing separation of configuration from implementation. +- **CustomContainerRegistryFlavor** is used during flavor creation. +- **CustomContainerRegistryConfig** validates values during registration. +- **CustomContainerRegistry** is utilized when the component is in use, allowing separation of configuration from implementation. -This design enables registration of flavors and components without needing all dependencies installed locally, provided the flavor and config are implemented in separate modules. 
+This design enables registration of flavors and components without needing all dependencies installed locally.

==================================================

=== File: docs/book/component-guide/container-registries/default.md ===

-### Summary: Storing Container Images Locally with ZenML
+### Default Container Registry Overview

-#### Default Container Registry
-The Default Container Registry in ZenML supports any container registry URI format and is ideal for local or unsupported remote registries.
+The Default Container Registry in ZenML is a built-in option that supports local and certain remote container registries. It is ideal for local setups or remote registries not covered by other flavors.

#### Local Registry URI Format
-Use the following format for local container registry URIs:
+For a local container registry, use the following URI format:
```shell
localhost:<PORT>

# Examples:
@@ -6974,61 +6891,57 @@
localhost:8000
localhost:9999
```

-#### Usage
-To utilize the Default Container Registry:
-1. Ensure Docker is installed and running.
-2. Register the container registry:
-```shell
-zenml container-registry register <CONTAINER_REGISTRY_NAME> --flavor=default --uri=<REGISTRY_URI>
-zenml stack update -c <CONTAINER_REGISTRY_NAME>
-```
+#### Usage Steps
+1. Ensure **Docker** is installed and running.
+2. Register the container registry:
+   ```shell
+   zenml container-registry register <CONTAINER_REGISTRY_NAME> --flavor=default --uri=<REGISTRY_URI>
+   ```
+3. Update the active stack:
+   ```shell
+   zenml stack update -c <CONTAINER_REGISTRY_NAME>
+   ```

#### Authentication Methods
-For private registries, configure authentication:
-- **Local Authentication**: Quick setup using local Docker credentials. Log in with:
-```shell
-docker login --username <USERNAME> --password-stdin <REGISTRY_URI>
-```
-*Note: This method is not portable across environments.*
-
-- **Docker Service Connector (Recommended)**: Use for better management and portability. Register with:
-```sh
-zenml service-connector register --type docker -i
-```
-Or non-interactively:
-```sh
-zenml service-connector register --type docker --username=<USERNAME> --password=<PASSWORD>
-```
+- **Local Authentication**: Quick setup using local Docker client credentials. Log in with:
+  ```shell
+  docker login --username <USERNAME> --password-stdin <REGISTRY_URI>
+  ```
+  *Note: This method is not portable across environments.*

-To check available resources:
-```sh
-zenml service-connector list-resources --connector-type docker --resource-id <REGISTRY_URI>
-```
+- **Docker Service Connector (Recommended)**: Use for private registries. Register with:
+  ```shell
+  zenml service-connector register --type docker -i
+  ```
+  Or non-interactively:
+  ```shell
+  zenml service-connector register --type docker --username=<USERNAME> --password=<PASSWORD>
+  ```

-#### Connecting the Registry
-After setting up a Docker Service Connector, register and connect the Default Container Registry:
-```sh
+#### Connecting to a Registry
+After setting up a Docker Service Connector, register the container registry:
+```shell
zenml container-registry register <CONTAINER_REGISTRY_NAME> -f default --uri=<REGISTRY_URI>
zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
```
-Non-interactive connection:
-```sh
+For non-interactive connection:
+```shell
zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
```

-#### Final Steps
-Use the Default Container Registry in a ZenML Stack:
-```sh
+#### Using the Registry in a ZenML Stack
+To register and set a stack with the new container registry:
+```shell
zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ...
--set ``` -For local Docker client authentication with the remote registry: -```sh +#### Local Client Authentication +If you need to interact with the remote registry via Docker CLI, temporarily authenticate using: +```shell zenml service-connector login ``` -#### Additional Information -For more details on configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.default_container_registry.DefaultContainerRegistry). +For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.default_container_registry.DefaultContainerRegistry). ================================================== @@ -7036,38 +6949,33 @@ For more details on configurable attributes, refer to the [SDK Docs](https://sdk ### Local Image Builder Overview -The Local Image Builder in ZenML utilizes the local Docker installation on your machine to build container images. It employs the official Docker Python library, which retrieves authentication credentials from the default location: `$HOME/.docker/config.json`. To use a different configuration directory, set the `DOCKER_CONFIG` environment variable: +The Local Image Builder is a built-in feature of ZenML that utilizes the local Docker installation on your machine to create container images. It employs the official Docker Python library for building and pushing images, which retrieves authentication credentials from `$HOME/.docker/config.json`. To use a different configuration directory, set the `DOCKER_CONFIG` environment variable: ```shell export DOCKER_CONFIG=/path/to/config_dir ``` - Ensure the specified directory contains a `config.json` file. ### When to Use Use the Local Image Builder if: -- You can install and run Docker on your machine. -- You want to utilize remote components requiring containerization without additional infrastructure setup. - -### Deployment - -The Local Image Builder is included with ZenML and requires no extra setup. +- You can install and run Docker on your client machine. +- You want to use remote components requiring containerization without additional infrastructure setup. -### Usage +### Deployment and Usage -Prerequisites: -- Docker must be installed and running. -- The Docker client should be authenticated to push to your chosen container registry. +The Local Image Builder is included with ZenML and requires no extra setup. To use it, ensure: +- Docker is installed and running. +- The Docker client is authenticated to push to your desired container registry. -To register and use the Local Image Builder: +To register the image builder and create a new stack, use the following commands: ```shell zenml image-builder register --flavor=local zenml stack register -i ... --set ``` -For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-image_builders/#zenml.image_builders.local_image_builder.LocalImageBuilder). +For more details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-image_builders/#zenml.image_builders.local_image_builder.LocalImageBuilder). 
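Since the Local Image Builder drives the official Docker Python library, a quick way to sanity-check the prerequisites is to ping the daemon with that same library (a minimal sketch, assuming the `docker` Python package is installed):

```python
import docker

# Resolves credentials and config the same way the Local Image Builder does,
# including the DOCKER_CONFIG environment variable if it is set.
client = docker.from_env()
print(client.ping())  # True when the local Docker daemon is reachable
```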
================================================== @@ -7075,32 +6983,33 @@ For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenm ### Google Cloud Image Builder Overview -The Google Cloud Image Builder, part of the ZenML `gcp` integration, utilizes [Google Cloud Build](https://cloud.google.com/build) for building container images. +The Google Cloud Image Builder is a component of the ZenML `gcp` integration that utilizes [Google Cloud Build](https://cloud.google.com/build) for building container images. -#### Use Cases -Use the Google Cloud Image Builder if: +#### When to Use +Utilize the Google Cloud Image Builder if: - You cannot install or use [Docker](https://www.docker.com) locally. - You are already using Google Cloud Platform (GCP). -- Your stack integrates with other GCP components like [GCS Artifact Store](../artifact-stores/gcp.md) or [Vertex Orchestrator](../orchestrators/vertex.md). +- Your stack primarily consists of GCP components like [GCS Artifact Store](../artifact-stores/gcp.md) or [Vertex Orchestrator](../orchestrators/vertex.md). -#### Deployment Requirements -To deploy the Google Cloud Image Builder: -1. Enable Google Cloud Build APIs in your GCP project. -2. Install the ZenML `gcp` integration: +#### Deployment Steps +1. **Enable Google Cloud Build APIs** on your GCP project. +2. **Install ZenML GCP Integration**: ```shell zenml integration install gcp ``` -3. Set up a GCP Artifact Store and a GCP container registry. -4. Optionally specify the GCP project ID and service account for builds. +3. **Set Up Required Resources**: + - A [GCP Artifact Store](../artifact-stores/gcp.md) for build context. + - A [GCP container registry](../container-registries/gcp.md) for the built image. + - Optionally, specify GCP project ID and service account credentials. #### Configuration Options You can customize: -- Docker image for building (default: `'gcr.io/cloud-builders/docker'`). +- Docker image for build steps (default: `'gcr.io/cloud-builders/docker'`). - Network for the build container. -- Build timeout. +- Build timeout settings. #### Registering the Image Builder -To register the image builder: +To register and use the image builder: ```shell zenml image-builder register \ --flavor=gcp \ @@ -7112,25 +7021,25 @@ zenml stack register -i ... --set ``` #### Authentication Methods -Authentication is essential for using the GCP Image Builder: -1. **Local Authentication**: Quick setup using local Google Cloud CLI credentials. Not portable across environments. -2. **GCP Service Connector (Recommended)**: Provides better security and reusability. Register with: - ```shell - zenml service-connector register --type gcp -i - ``` +Authentication is required to access GCP services: +- **Local Authentication**: Quick setup using local Google Cloud CLI credentials. +- **GCP Service Connector** (recommended): Provides auto-configuration and better security practices. Register with: + ```shell + zenml service-connector register --type gcp -i + ``` #### Connecting the Image Builder After setting up authentication, connect the image builder: ```shell zenml image-builder connect -i ``` -For non-interactive connection: +For a non-interactive version: ```shell zenml image-builder connect --connector ``` #### Using GCP Credentials -You can also use a GCP Service Account Key for authentication: +Alternatively, use a GCP Service Account Key: ```shell zenml image-builder register \ --flavor=gcp \ @@ -7144,15 +7053,15 @@ zenml stack register -i ... 
--set ``` #### Caveats -- Google Cloud Build uses a network called `cloudbuild` for executing build steps, which provides Application Default Credentials (ADC). -- The default network option allows access to other GCP services, useful for installing private dependencies from GCP Artifact Registry. -- If using private dependencies, customize the Docker base image: - ```dockerfile - FROM zenmldocker/zenml:latest - RUN pip install keyrings.google-artifactregistry-auth - ``` +- Google Cloud Build uses a default network (`cloudbuild`) for builds, which allows access to GCP services. +- To install private dependencies from GCP Artifact Registry, use a custom base image with `keyrings.google-artifactregistry-auth`: + ```dockerfile + FROM zenmldocker/zenml:latest + RUN pip install keyrings.google-artifactregistry-auth + ``` +- Specify the ZenML version in the base image tag for better version control. -**Note**: Specify the ZenML version in the base image tag for better version control. +This summary provides essential details for deploying and using the Google Cloud Image Builder within ZenML, including setup, authentication, and configuration options. ================================================== @@ -7160,89 +7069,87 @@ zenml stack register -i ... --set ### Kaniko Image Builder Overview -The Kaniko image builder, part of the ZenML `kaniko` integration, utilizes [Kaniko](https://github.com/GoogleContainerTools/kaniko) for building container images. +The Kaniko image builder is part of the ZenML `kaniko` integration, utilizing [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images. -#### When to Use -- If you cannot install or use [Docker](https://www.docker.com) on your machine. -- If you are familiar with Kubernetes. +#### When to Use Kaniko +- If you cannot install or use [Docker](https://www.docker.com) on your client machine. +- If you are familiar with or already using Kubernetes. #### Deployment Requirements - A deployed Kubernetes cluster. -- ZenML `kaniko` integration installed: +- ZenML `kaniko` integration installed: ```shell zenml integration install kaniko ``` - [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) installed. - A remote container registry as part of your stack. -- Optionally, configure to store build context in an artifact store by setting `store_context_in_artifact_store=True` and including a remote artifact store in your stack. +- Optionally, configure the build context storage in the artifact store by setting `store_context_in_artifact_store=True`. - Optionally, adjust the pod running timeout with `pod_running_timeout`. #### Registering the Image Builder -To register the Kaniko image builder: +To register and use the Kaniko image builder in your active stack: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= [ --pod_running_timeout= ] -# Register and activate a stack zenml stack register -i ... --set ``` -#### Authentication Requirements -The Kaniko build pod must authenticate to: +#### Authentication for Container Registry and Artifact Store +The Kaniko build pod requires authentication to: - Push to the container registry. - Pull from a private parent image registry. - Read from the artifact store if configured. -#### Cloud Provider Specific Configurations +**Setup Instructions by Cloud Provider:** -**AWS:** -1. Attach `EC2InstanceProfileForImageBuilderECRContainerBuilds` policy to EKS node IAM role. -2. 
Register the image builder with required environment variables: - ```shell - zenml image-builder register \ - --flavor=kaniko \ - --kubernetes_context= \ - --env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]' - ``` +- **AWS:** + - Attach `EC2InstanceProfileForImageBuilderECRContainerBuilds` policy to EKS node IAM role. + - Register the image builder with necessary environment variables: + ```shell + zenml image-builder register \ + --flavor=kaniko \ + --kubernetes_context= \ + --env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]' + ``` -**GCP:** -1. Enable workload identity for your cluster. -2. Create a Google service account and bind it to a Kubernetes service account. -3. Grant permissions to push to GCR and read from GCP bucket. -4. Register the image builder with namespace and service account: - ```shell - zenml image-builder register \ - --flavor=kaniko \ - --kubernetes_context= \ - --kubernetes_namespace= \ - --service_account_name= - ``` +- **GCP:** + - Enable workload identity and create necessary service accounts. + - Grant permissions and register the image builder: + ```shell + zenml image-builder register \ + --flavor=kaniko \ + --kubernetes_context= \ + --kubernetes_namespace= \ + --service_account_name= + ``` -**Azure:** -1. Create a Kubernetes `configmap` for Docker config: - ```shell - kubectl create configmap docker-config --from-literal='config.json={ "credHelpers": { "mycr.azurecr.io": "acr-env" } }' - ``` -2. Configure the image builder to mount the `configmap`: - ```shell - zenml image-builder register \ - --flavor=kaniko \ - --kubernetes_context= \ - --volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \ - --volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]' - ``` +- **Azure:** + - Create a Kubernetes `configmap` for Docker config: + ```shell + kubectl create configmap docker-config --from-literal='config.json={ "credHelpers": { "mycr.azurecr.io": "acr-env" } }' + ``` + - Register the image builder with the mounted configmap: + ```shell + zenml image-builder register \ + --flavor=kaniko \ + --kubernetes_context= \ + --volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \ + --volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]' + ``` #### Additional Parameters for Kaniko Build -You can pass additional parameters via `executor_args`: +You can pass additional parameters using the `executor_args` attribute: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --executor_args='["--label", "key=value"]' ``` -Common flags include: + +**Common Flags:** - `--cache`: Disable caching (`false`). - `--cache-dir`: Directory for cached layers (`/cache`). - `--cache-repo`: Repository for cached layers (`gcr.io/kaniko-project/executor`). @@ -7250,7 +7157,7 @@ Common flags include: - `--cleanup`: Disable cleanup of the working directory (`false`). - `--compressed-caching`: Disable compressed caching (`false`). -For more detailed configurations, refer to the [Kaniko documentation](https://github.com/GoogleContainerTools/kaniko#additional-flags). +For more details, refer to the [Kaniko additional flags](https://github.com/GoogleContainerTools/kaniko#additional-flags). 
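Putting the pieces together, a registration that enables layer caching might look like this (a sketch; the builder name, Kubernetes context, and cache repository are hypothetical):

```shell
zenml image-builder register kaniko_image_builder \
    --flavor=kaniko \
    --kubernetes_context=my-cluster-context \
    --executor_args='["--cache=true", "--cache-repo=docker.io/my-org/kaniko-cache"]'
```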
================================================== @@ -7258,115 +7165,91 @@ For more detailed configurations, refer to the [Kaniko documentation](https://gi ### AWS Image Builder with ZenML -**Overview**: The AWS Image Builder is a component of the ZenML `aws` integration that utilizes [AWS CodeBuild](https://aws.amazon.com/codebuild) for building container images. +**Overview**: The AWS Image Builder, part of the ZenML `aws` integration, utilizes [AWS CodeBuild](https://aws.amazon.com/codebuild) to create container images. #### When to Use - If Docker cannot be installed on your client machine. -- If you are already using AWS. +- If you are already using AWS services. - If your stack includes AWS components like the [S3 Artifact Store](../artifact-stores/s3.md) or [SageMaker Orchestrator](../orchestrators/sagemaker.md). -#### Deployment -For quick deployment, refer to the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md) or the [ZenML AWS Terraform module](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md). +#### Deployment Options +- For a quick setup, use the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md) or the [ZenML AWS Terraform module](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md). #### Usage Requirements 1. Install the ZenML `aws` integration: ```shell zenml integration install aws ``` -2. Set up an [S3 Artifact Store](../artifact-stores/s3.md) for build context. -3. Optionally, configure an [AWS container registry](../container-registries/aws.md) for image storage. -4. Create an [AWS CodeBuild project](https://aws.amazon.com/codebuild) in the desired AWS region. Key configuration values include: +2. Set up an [S3 Artifact Store](../artifact-stores/s3.md). +3. Optionally, create an [AWS container registry](../container-registries/aws.md). +4. Create an [AWS CodeBuild project](https://aws.amazon.com/codebuild) in the desired AWS region. Key configurations include: - **Source Type**: `Amazon S3` - **Bucket**: Same as the S3 Artifact Store + - **Environment Type**: `Linux Container` - **Environment Image**: `bentolor/docker-dind-awscli` - **Privileged Mode**: `false` -**Service Role Permissions**: -Ensure the CodeBuild project’s service role has permissions to access the S3 bucket and ECR registry: -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": ["s3:GetObject"], - "Resource": "arn:aws:s3:::/*" - }, - { - "Effect": "Allow", - "Action": ["ecr:*"], - "Resource": "arn:aws:ecr:::repository/" - }, - { - "Effect": "Allow", - "Action": ["ecr:GetAuthorizationToken"], - "Resource": "*" - } - ] -} -``` +5. Ensure the **Service Role** for CodeBuild has permissions for S3 and ECR (if applicable): + ```json + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": ["s3:GetObject"], + "Resource": "arn:aws:s3:::/*" + }, + { + "Effect": "Allow", + "Action": ["ecr:*"], + "Resource": "arn:aws:ecr:::repository/" + }, + { + "Effect": "Allow", + "Action": ["ecr:GetAuthorizationToken"], + "Resource": "*" + } + ] + } + ``` + +6. Optionally, register an [AWS Service Connector](../../how-to/infrastructure-deployment/auth-management/aws-service-connector.md) for build triggering. + +#### Authentication Methods +- **Local Authentication**: Quick setup using local AWS CLI credentials (not portable). 
+- **AWS Service Connector (recommended)**: Use for better security and multi-component access. Register with: + ```shell + zenml service-connector register --type aws -i + ``` #### Registering the Image Builder -To register the image builder: +To register the AWS Image Builder: ```shell zenml image-builder register \ --flavor=aws \ - --code_build_project= -``` -To register and activate a stack: -```shell -zenml stack register -i ... --set + --code_build_project= \ + --connector ``` -#### Authentication Methods -Authentication is essential for using the AWS Image Builder: - -1. **Local Authentication**: Quick setup using local AWS CLI credentials. Requires AWS CLI installation. - - Not portable across environments; use an AWS Service Connector for portability. - -2. **AWS Service Connector (recommended)**: - - Register using: - ```shell - zenml service-connector register --type aws -i - ``` - - Auto-configure: - ```shell - zenml service-connector register --type aws --resource-type aws-generic --auto-configure - ``` - -**Permissions for CodeBuild**: -Ensure the AWS credentials have permissions to access CodeBuild: -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": ["codebuild:StartBuild", "codebuild:BatchGetBuilds"], - "Resource": "arn:aws:codebuild:::project/" - } - ] -} +To connect an existing Image Builder to a Service Connector: +```shell +zenml image-builder connect --connector ``` #### Customizing AWS CodeBuild Builds -You can customize the image builder with additional attributes: +You can customize builds by setting additional attributes during registration: - `build_image`: Default is `bentolor/docker-dind-awscli`. - `compute_type`: Default is `BUILD_GENERAL1_SMALL`. - `custom_env_vars`: Custom environment variables. -- `implicit_container_registry_auth`: Use implicit (default) or explicit authentication for container registry. +- `implicit_container_registry_auth`: Use implicit (default) or explicit authentication for container registry access. #### Final Steps -To connect an AWS Image Builder to an AWS Service Connector: -```shell -zenml image-builder connect --connector -``` -To use the AWS Image Builder in a ZenML Stack: +Use the AWS Image Builder in a ZenML Stack: ```shell zenml stack register -i ... --set -``` +``` -This summary encapsulates the key points and technical details necessary for understanding and utilizing the AWS Image Builder with ZenML. +This summary captures the essential details for using AWS Image Builder with ZenML, including setup, configuration, and customization options. ================================================== @@ -7375,10 +7258,10 @@ This summary encapsulates the key points and technical details necessary for und ### Develop a Custom Image Builder #### Overview -To create a custom image builder in ZenML, start by understanding the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). +To create a custom image builder in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction -The `BaseImageBuilder` is the abstract class for building Docker images. It provides a basic interface for customization: +The `BaseImageBuilder` is an abstract class for building Docker images. 
It provides a basic interface:

```python
from abc import ABC, abstractmethod

@@ -7405,11 +7288,11 @@ class BaseImageBuilder(StackComponent, ABC):
```

#### Steps to Create a Custom Image Builder
-1. **Subclass `BaseImageBuilder`:** Implement the abstract `build` method to create a Docker image using the provided context.
-2. **Configuration Class:** If needed, create a class inheriting from `BaseImageBuilderConfig` to define configuration parameters.
-3. **Flavor Class:** Combine the implementation and configuration by inheriting from `BaseImageBuilderFlavor`, ensuring to set a `name` for the flavor.
+1. **Subclass `BaseImageBuilder`:** Implement the abstract `build` method to create a Docker image.
+2. **Create Configuration Class:** Inherit from `BaseImageBuilderConfig` for any configuration parameters.
+3. **Combine Implementation and Configuration:** Inherit from `BaseImageBuilderFlavor` and define a `name` for the flavor.

-Register the flavor using the CLI:
+To register the flavor, use the CLI:

```shell
zenml image-builder flavor register <path.to.MyImageBuilderFlavor>
@@ -7421,69 +7304,98 @@ zenml image-builder flavor register flavors.my_flavor.MyImageBuilderFlavor
```

-#### Important Notes
-- Ensure ZenML is initialized at the root of your repository for proper flavor resolution.
-- After registration, list available flavors:
+**Note:** Initialize ZenML at the root of your repository to avoid resolution issues.
+
+#### Listing Available Flavors
+To see your registered flavor:

```shell
zenml image-builder flavor list
```

-#### Workflow Integration
-- **Flavor Class:** Used during flavor creation via CLI.
-- **Config Class:** Validates user input during stack component registration.
-- **Image Builder Class:** Activated when the component is used, allowing separation of configuration from implementation.
+#### Important Considerations
+- The **CustomImageBuilderFlavor** is used during flavor creation.
+- The **CustomImageBuilderConfig** validates user input during registration.
+- The **CustomImageBuilder** is utilized when the component is in use.
+
+This design separates flavor configuration from implementation, allowing for registration even if dependencies are not installed locally.

#### Custom Build Context
-To customize the build context, subclass `BuildContext` and override the `build_context_class` property in your image builder implementation.
+If a different build context is needed, subclass `BuildContext` and override the `build_context_class` property in your image builder:
+
+```python
+from typing import Type
+
+from zenml.image_builders import BaseImageBuilder, BuildContext
+
+class MyCustomBuildContext(BuildContext):
+    """Custom build context implementation."""
+
+class MyImageBuilder(BaseImageBuilder):
+    # `build` and the other abstract methods are omitted for brevity.
+    @property
+    def build_context_class(self) -> Type["MyCustomBuildContext"]:
+        return MyCustomBuildContext
+```
+
+This allows for tailored build contexts beyond the default Docker context.

==================================================

=== File: docs/book/component-guide/image-builders/image-builders.md ===

-### Image Builders in ZenML
+# Image Builders in ZenML
+
+## Overview
+The image builder is crucial for building container images in remote MLOps environments, enabling the execution of machine-learning pipelines.

-**Overview**: Image builders are crucial for building container images in remote MLOps stacks, enabling the execution of machine-learning pipelines in various environments.

+## When to Use
+Use the image builder when components of your stack need to create container images, particularly for ZenML's remote orchestrators, step operators, and model deployers.
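A quick way to check whether the active stack already provides one is to query the stack's components. A minimal sketch, assuming a configured ZenML client; the `image_builder` accessor mirrors the `experiment_tracker` accessor used elsewhere in these docs:

```python
from zenml.client import Client

# Look up the image builder component of the currently active stack, if any.
image_builder = Client().active_stack.image_builder
if image_builder:
    print(f"Active image builder flavor: {image_builder.flavor}")
else:
    print("The active stack has no image builder registered.")
```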
-**Usage**: An image builder is required when components of the ZenML stack need to create container images, which is common for remote orchestrators, step operators, and some model deployers. +## Image Builder Flavors +ZenML includes a `local` image builder by default, with additional options available through integrations: -**Available Image Builder Flavors**: -- **LocalImageBuilder**: Builds Docker images locally on the client machine. -- **KanikoImageBuilder**: Builds Docker images in Kubernetes using Kaniko. -- **GCPImageBuilder**: Utilizes Google Cloud Build for image creation. -- **AWSImageBuilder**: Uses AWS Code Build for building images. -- **Custom Implementation**: Allows users to extend the image builder abstraction for custom solutions. +| Image Builder | Flavor | Integration | Notes | +|-----------------------|----------|-------------|---------------------------------| +| [LocalImageBuilder](local.md) | `local` | _built-in_ | Builds Docker images locally. | +| [KanikoImageBuilder](kaniko.md) | `kaniko` | `kaniko` | Builds Docker images in Kubernetes. | +| [GCPImageBuilder](gcp.md) | `gcp` | `gcp` | Uses Google Cloud Build. | +| [AWSImageBuilder](aws.md) | `aws` | `aws` | Uses AWS Code Build. | +| [Custom Implementation](custom.md) | _custom_ | | Create your own image builder. | -**Command to List Image Builder Flavors**: +To view available image builder flavors, use: ```shell zenml image-builder flavor list ``` -**Integration**: Users do not need to interact directly with image builders in their code. The active ZenML stack automatically utilizes the appropriate image builder for any component requiring container image creation. +## Usage +Direct interaction with the image builder is unnecessary. The active ZenML stack automatically utilizes the appropriate image builder for any component requiring container image creation. ================================================== === File: docs/book/component-guide/experiment-trackers/wandb.md === -### Weights & Biases Experiment Tracker Overview +# Weights & Biases Integration with ZenML -The Weights & Biases (W&B) Experiment Tracker is integrated with ZenML to log and visualize data from ML pipeline steps, such as models, parameters, and metrics. It is particularly useful during the iterative ML experimentation phase and can also be adapted for automated pipeline runs. +## Overview +The Weights & Biases (W&B) Experiment Tracker is integrated with ZenML to log and visualize pipeline information such as models, parameters, and metrics. It is particularly useful for tracking experiments during the ML development phase and can also be used in production workflows. -#### Use Cases -- Continue using W&B for tracking results while adopting MLOps workflows with ZenML. -- Gain a visually interactive way to navigate results from ZenML pipelines. -- Share logged artifacts and metrics with team members or stakeholders. +## When to Use +- If you are already using W&B for experiment tracking and want to integrate it with ZenML. +- For visually navigating results from ZenML pipeline runs. +- To share logged artifacts and metrics with teams or stakeholders. -If unfamiliar with W&B, consider using another experiment tracking tool. +Consider other experiment trackers if you are unfamiliar with W&B. 
-#### Deployment
-To deploy the W&B Experiment Tracker, install the integration:
+## Deployment
+To use the W&B Experiment Tracker, install the integration:

```shell
zenml integration install wandb -y
```

-**Authentication Methods:**
+### Authentication
+Configure the following credentials for W&B:
+- `api_key`: Required API key for your W&B account.
+- `project_name`: Name of the project for your runs.
+- `entity`: Username or team name for sending runs.
+
+#### Authentication Methods
1. **Basic Authentication** (not recommended for production):
   ```shell
   zenml experiment-tracker register wandb_experiment_tracker --flavor=wandb --entity=<entity> --project_name=<project_name> --api_key=<key>
@@ -7495,13 +7407,13 @@ zenml integration install wandb -y
   ```shell
   zenml secret create wandb_secret --entity=<entity> --project_name=<project_name> --api_key=<key>
   ```
-   Register the experiment tracker:
+   Register the tracker using the secret:
   ```shell
-   zenml experiment-tracker register wandb_tracker --flavor=wandb --entity={{wandb_secret.entity}} --project_name={{wandb_secret.project_name}} --api_key={{wandb_secret.api_key}} ...
+   zenml experiment-tracker register wandb_tracker --flavor=wandb --entity={{wandb_secret.entity}} --project_name={{wandb_secret.project_name}} --api_key={{wandb_secret.api_key}}
   ```

-#### Usage
-To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator and utilize W&B logging:
+## Usage
+To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator and use W&B logging:

```python
import wandb
@@ -7509,16 +7421,13 @@ from wandb.integration.keras import WandbCallback

@step(experiment_tracker="<experiment_tracker_name>")
def tf_trainer(...):
-    ...
    model.fit(..., callbacks=[WandbCallback(log_evaluation=True)])
    wandb.log({"<metric_name>": metric})
```

You can dynamically reference the active experiment tracker:
-
```python
from zenml.client import Client
-
experiment_tracker = Client().active_stack.experiment_tracker

@step(experiment_tracker=experiment_tracker.name)
@@ -7526,17 +7435,16 @@ def tf_trainer(...):
    ...
```

-#### Weights & Biases UI
-Each ZenML step using W&B creates a separate experiment run, viewable in the W&B UI. The URL for a specific run can be accessed via:
-
+### W&B UI
+Each ZenML step using W&B creates a separate experiment run, accessible via the W&B UI. The URL for a specific run can be retrieved as follows:
```python
-tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
+from zenml.client import Client
+
+client = Client()
+last_run = client.get_pipeline("<PIPELINE_NAME>").last_run
+tracking_url = last_run.get_step("<STEP_NAME>").run_metadata["experiment_tracker_url"].value
print(tracking_url)
```

-#### Additional Configuration
-You can customize the W&B experiment tracker settings:
-
+### Additional Configuration
+You can customize the W&B experiment tracker with `WandbExperimentTrackerSettings`:
```python
from zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor import WandbExperimentTrackerSettings

@@ -7547,12 +7455,12 @@ def my_step(...):
    ...
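    # Note: the same settings object can also be applied to a whole pipeline run,
    # e.g. `my_pipeline.with_options(settings={"experiment_tracker": wandb_settings})()`,
    # as the full code example below demonstrates.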
``` -### Full Code Example -Here’s a concise example of a ZenML pipeline using W&B: - +## Full Code Example +A complete example demonstrating the integration: ```python from zenml import pipeline, step from zenml.client import Client +from zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor import WandbExperimentTrackerSettings from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments from datasets import load_dataset import wandb @@ -7567,21 +7475,22 @@ def prepare_data(): @step(experiment_tracker=experiment_tracker.name) def train_model(train_dataset, eval_dataset): model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2) - training_args = TrainingArguments(output_dir="./results", num_train_epochs=3, per_device_train_batch_size=16, logging_dir="./logs") + training_args = TrainingArguments(output_dir="./results", num_train_epochs=3, per_device_train_batch_size=16) trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset) trainer.train() wandb.log({"final_evaluation": trainer.evaluate()}) -@pipeline +@pipeline(enable_cache=False) def fine_tuning_pipeline(): train_dataset, eval_dataset = prepare_data() train_model(train_dataset, eval_dataset) if __name__ == "__main__": - fine_tuning_pipeline() + wandb_settings = WandbExperimentTrackerSettings(tags=["distilbert", "imdb"]) + fine_tuning_pipeline.with_options(settings={"experiment_tracker": wandb_settings})() ``` -For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-wandb/#zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor.WandbExperimentTrackerSettings). +For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-wandb/#zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor.WandbExperimentTrackerSettings). ================================================== @@ -7589,27 +7498,31 @@ For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integr ### Vertex AI Experiment Tracker Overview -The Vertex AI Experiment Tracker is a component of the ZenML integration with Vertex AI, designed for logging and visualizing experiment data from machine learning pipelines. It leverages the Vertex AI tracking service to manage models, parameters, and metrics. +The Vertex AI Experiment Tracker is a component of the ZenML integration that utilizes the Vertex AI tracking service for logging and visualizing pipeline step information (models, parameters, metrics). It is ideal for iterative ML experimentation and can also be used for automated pipeline runs. #### Use Cases -- Ideal for iterative ML experimentation and transitioning to production workflows. -- Suitable if already using Vertex AI for experiment tracking or seeking a visually interactive solution within the Google Cloud ecosystem. -- Not recommended for those unfamiliar with Vertex AI or using other cloud providers. +- **Continuity**: For users already employing Vertex AI for experiment tracking and transitioning to MLOps workflows with ZenML. +- **Visualization**: For those seeking an interactive way to navigate results from ZenML pipeline runs. +- **Integration**: For building ML workflows within the Google Cloud ecosystem. 
-#### Configuration -To set up the Vertex AI Experiment Tracker, install the GCP ZenML integration: +Consider other Experiment Tracker flavors if you are unfamiliar with Vertex AI or using different cloud providers. + +### Configuration + +To set up the Vertex AI Experiment Tracker, install the GCP integration: ```shell zenml integration install gcp -y ``` -**Configuration Options:** -- `project`: GCP project name (optional). -- `location`: GCP location (defaults to `us-central1`). -- `staging_bucket`: GCS bucket for staging artifacts (optional). -- `service_account_path`: Path to service account JSON file (optional). +#### Configuration Options +- **project**: GCP project name (optional, inferred if None). +- **location**: GCP location for experiments (defaults to us-central1). +- **staging_bucket**: GCS bucket for staging artifacts (format: gs://...). +- **service_account_path**: Path to the service account credential JSON file (optional). + +Register the tracker: -**Registering the Tracker:** ```shell zenml experiment-tracker register vertex_experiment_tracker \ --flavor=vertex \ @@ -7620,17 +7533,19 @@ zenml experiment-tracker register vertex_experiment_tracker \ zenml stack register custom_stack -e vertex_experiment_tracker ... --set ``` -#### Authentication -Authentication is necessary for using the Vertex AI Experiment Tracker. Options include: +### Authentication Methods -1. **Implicit Authentication**: Quick setup for local development using `gcloud` CLI. -2. **GCP Service Connector**: Recommended for production, allowing secure and reusable credentials. -3. **GCP Credentials**: Use a service account key stored in a ZenML secret. +1. **Implicit Authentication**: Quick local setup using `gcloud auth login`. Not recommended for production. + +2. **GCP Service Connector (Recommended)**: Provides auto-configuration and security for long-lived credentials. Register using: -**Example for GCP Service Connector:** -```shell +```sh zenml service-connector register --type gcp -i +``` +After setting up the connector, register the tracker: + +```shell zenml experiment-tracker register \ --flavor=vertex \ --project= \ @@ -7640,55 +7555,91 @@ zenml experiment-tracker register \ zenml experiment-tracker connect --connector ``` -#### Usage -To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator. +3. **GCP Credentials**: Generate a GCP Service Account Key, store it in a ZenML Secret, and reference it: + +```shell +zenml experiment-tracker register \ + --flavor=vertex \ + --project= \ + --location= \ + --staging_bucket=gs:// \ + --service_account_path=path/to/service_account_key.json +``` + +### Usage + +To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator. 
Use Vertex AI's logging capabilities as follows: + +#### Example 1: Logging Metrics + +Install the required library: + +```bash +pip install google-cloud-aiplatform[autologging] +``` -**Example 1: Logging Metrics** ```python from google.cloud import aiplatform +class VertexAICallback(tf.keras.callbacks.Callback): + def on_epoch_end(self, epoch, logs=None): + metrics = {key: value for key, value in (logs or {}).items() if isinstance(value, (int, float))} + aiplatform.log_time_series_metrics(metrics=metrics, step=epoch) + @step(experiment_tracker="") -def train_model(...): +def train_model(config, x_train, y_train, x_val, y_val): aiplatform.autolog() - model.fit(..., callbacks=[VertexAICallback()]) + model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=config.epochs, callbacks=[VertexAICallback()]) aiplatform.log_metrics(...) aiplatform.log_params(...) ``` -**Example 2: Uploading TensorBoard Logs** +#### Example 2: Uploading TensorBoard Logs + +Install the required library: + +```bash +pip install google-cloud-aiplatform[tensorboard] +``` + ```python @step(experiment_tracker="") -def train_model(...): - aiplatform.start_upload_tb_log(...) - model.fit(...) +def train_model(config, gcs_path, x_train, y_train, x_val, y_val): + tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=gcs_path, histogram_freq=1) + aiplatform.start_upload_tb_log(tensorboard_experiment_name="experiment_name", logdir=gcs_path) + model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=config.epochs, callbacks=[tensorboard_callback]) aiplatform.end_upload_tb_log() aiplatform.log_metrics(...) aiplatform.log_params(...) ``` -#### Accessing Experiment Tracker UI -Retrieve the URL for the Vertex AI experiment linked to a ZenML run: +#### Experiment Tracker UI + +Retrieve the URL of the Vertex AI experiment linked to a ZenML run: + ```python -tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value +from zenml.client import Client + +client = Client() +tracking_url = client.get_pipeline("").last_run.steps.get("").run_metadata["experiment_tracker_url"].value print(tracking_url) ``` #### Additional Configuration -Use `VertexExperimentTrackerSettings` to specify an experiment name or TensorBoard instance: + +For further configuration, use `VertexExperimentTrackerSettings` to specify an experiment name or TensorBoard instance: + ```python from zenml.integrations.gcp.flavors.vertex_experiment_tracker_flavor import VertexExperimentTrackerSettings -vertexai_settings = VertexExperimentTrackerSettings( - experiment="", - experiment_tensorboard="TENSORBOARD_RESOURCE_NAME" -) +vertexai_settings = VertexExperimentTrackerSettings(experiment="", experiment_tensorboard="TENSORBOARD_RESOURCE_NAME") @step(experiment_tracker="", settings={"experiment_tracker": vertexai_settings}) -def step_one(data: np.ndarray) -> np.ndarray: +def step_one(data): ... ``` -For more detailed configurations, refer to the ZenML documentation on runtime configuration. +For more details, refer to the ZenML documentation on runtime configuration. ================================================== @@ -7696,29 +7647,35 @@ For more detailed configurations, refer to the ZenML documentation on runtime co ### Experiment Trackers in ZenML -**Overview**: Experiment trackers enable tracking of ML experiments by logging detailed information about models, datasets, metrics, and parameters. 
In ZenML, each pipeline run is treated as an experiment, and results are stored via Experiment Tracker components, linking runs to experiments. +**Overview**: Experiment trackers log detailed information about ML experiments, including models, datasets, metrics, and parameters, enabling users to visualize and compare results across runs. In ZenML, each pipeline run is treated as an experiment, with results stored through Experiment Tracker components. **Key Points**: -- **Integration**: Experiment Tracker is an optional Stack Component that must be registered in your ZenML Stack. ZenML already tracks artifacts through the mandatory Artifact Store. -- **Usability**: While ZenML records artifacts programmatically, Experiment Trackers provide a user-friendly interface for browsing and visualizing logged information. -- **Architecture**: Experiment Trackers fit into the ZenML stack, enhancing the overall functionality with visual features. +- **Integration**: Experiment Trackers are optional stack components that must be registered as part of a ZenML stack. ZenML also tracks artifacts via the mandatory Artifact Store. +- **Usability**: While ZenML records artifacts programmatically, Experiment Trackers provide user-friendly UIs for browsing and visualizing logged information, making them ideal for enhancing ZenML's capabilities. + +**Architecture**: Experiment Trackers fit into the ZenML stack architecture, allowing for integration with various tracking tools. **Available Experiment Tracker Flavors**: -| Tracker | Flavor | Integration | Notes | -|------------------|----------|-------------|-----------------------------------------------------| -| [Comet](comet.md) | `comet` | `comet` | Comet tracking and visualization | -| [MLflow](mlflow.md) | `mlflow` | `mlflow` | MLflow tracking and visualization | -| [Neptune](neptune.md) | `neptune`| `neptune` | Neptune tracking and visualization | -| [Weights & Biases](wandb.md)| `wandb` | `wandb` | Weights & Biases tracking and visualization | -| [Custom Implementation](custom.md)| _custom_| | Custom tracker implementation | +| Tracker | Flavor | Integration | Notes | +|---------|--------|-------------|-------| +| [Comet](comet.md) | `comet` | `comet` | Adds Comet tracking capabilities | +| [MLflow](mlflow.md) | `mlflow` | `mlflow` | Adds MLflow tracking capabilities | +| [Neptune](neptune.md) | `neptune` | `neptune` | Adds Neptune tracking capabilities | +| [Weights & Biases](wandb.md) | `wandb` | `wandb` | Adds Weights & Biases tracking capabilities | +| [Custom Implementation](custom.md) | _custom_ | | Custom tracking solutions | + +**Command to List Flavors**: +```shell +zenml experiment-tracker flavor list +``` **Usage Steps**: 1. Configure and add an Experiment Tracker to your ZenML stack. -2. Enable the tracker for specific pipeline steps using a decorator. -3. Log information (models, metrics, data) explicitly in your steps. -4. Access the Experiment Tracker UI for visualization. +2. Enable the tracker for specific pipeline steps using decorators. +3. Log information (models, metrics, data) explicitly within the steps. +4. Access the Experiment Tracker UI to visualize logged information. 
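A minimal sketch of steps 2 and 3, assuming a tracker named `my_tracker` is registered in the active stack (the concrete logging calls depend on the flavor's own client library):

```python
from zenml import step

@step(experiment_tracker="my_tracker")
def train_model() -> None:
    # Log models, metrics, or data here with the tracker's client library,
    # e.g. mlflow.log_metric(...) or wandb.log(...).
    ...
```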
-**Code Snippet**: To get the URL of the Experiment Tracker UI for a specific pipeline run step: +**Accessing Experiment Tracker UI**: ```python from zenml.client import Client @@ -7727,7 +7684,9 @@ step = pipeline_run.steps[""] experiment_tracker_url = step.run_metadata["experiment_tracker_url"].value ``` -**Note**: Experiment trackers will automatically mark runs as failed if the corresponding ZenML pipeline step fails. For detailed usage, refer to the documentation for the specific Experiment Tracker flavor you are using. +**Note**: If a ZenML pipeline step fails, the corresponding experiment run will be marked as failed automatically. + +For detailed usage of specific Experiment Tracker flavors, refer to the respective documentation. ================================================== @@ -7735,16 +7694,16 @@ experiment_tracker_url = step.run_metadata["experiment_tracker_url"].value ### Neptune Experiment Tracker Overview -The Neptune Experiment Tracker integrates with [neptune.ai](https://neptune.ai/product/experiment-tracking) to log and visualize data from ZenML pipeline steps, such as models, parameters, and metrics. It is particularly useful during the iterative ML experimentation phase and for production-ready model registries. +The Neptune Experiment Tracker, integrated with ZenML, utilizes [neptune.ai](https://neptune.ai/product/experiment-tracking) for logging and visualizing pipeline information (models, parameters, metrics). It's beneficial for: -#### When to Use Neptune Experiment Tracker -- If you are already using neptune.ai for tracking experiment results and want to integrate it with ZenML. -- If you prefer a visually interactive way to navigate results from ZenML pipeline runs. -- If you want to share logged artifacts and metrics with your team or stakeholders. +- Continuity in tracking experiment results with neptune.ai while adopting MLOps best practices in ZenML. +- Enhanced visualization of results from ZenML pipeline runs. +- Sharing logged artifacts and metrics with teams or stakeholders. -Consider other [Experiment Tracker flavors](./experiment-trackers.md#experiment-tracker-flavors) if you are unfamiliar with neptune.ai. +If unfamiliar with neptune.ai, consider other [Experiment Tracker flavors](./experiment-trackers.md#experiment-tracker-flavors). + +### Deployment -#### Deployment To deploy the Neptune Experiment Tracker, install the integration: ```shell @@ -7752,6 +7711,7 @@ zenml integration install neptune -y ``` **Authentication Methods:** + 1. **ZenML Secret (Recommended)**: Store credentials securely. ```shell zenml secret create neptune_secret --api_token= @@ -7773,13 +7733,10 @@ zenml integration install neptune -y zenml stack register neptune_stack -e neptune_experiment_tracker ... --set ``` -#### Usage -To log information from a ZenML pipeline step: +### Usage -1. Enable the experiment tracker with the `@step` decorator. -2. Fetch the Neptune run object and log data. 
+To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator and fetch the Neptune run object:

-Example:
```python
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run
from zenml import step

@@ -7798,35 +7755,46 @@ def train_model() -> SVC:
    return model
```

-**Logging Metadata:**
+#### Logging Metadata
+
Use `get_step_context` to log ZenML metadata:
+
```python
+from neptune.utils import stringify_unsupported
+from zenml import get_step_context
+
@step(experiment_tracker="neptune_tracker")
def my_step():
    neptune_run = get_neptune_run()
    context = get_step_context()
    neptune_run["pipeline_metadata"] = stringify_unsupported(context.pipeline_run.get_metadata().dict())
```

-**Adding Tags:**
-Use `NeptuneExperimentTrackerSettings` to add tags:
+#### Adding Tags
+
+Utilize `NeptuneExperimentTrackerSettings` to add tags:
+
```python
+from zenml.integrations.neptune.flavors import NeptuneExperimentTrackerSettings
+
neptune_settings = NeptuneExperimentTrackerSettings(tags={"keras", "mnist"})
+
+# Apply via: @step(..., settings={"experiment_tracker": neptune_settings})
```

-#### Neptune UI
-Neptune provides a web-based UI to inspect tracked experiments. The URL for the Neptune run is printed in the console upon initialization and can also be found in the metadata tab of any step using the tracker.
+### Neptune UI
+
+Neptune provides a web-based UI for tracking experiments. Each pipeline run is logged as a separate experiment, accessible via the console or the metadata tab of any step using the tracker.
+
+### Full Code Example
+
+Here’s a complete example demonstrating the integration:

-#### Full Code Example
-Here’s a complete example integrating Neptune with ZenML:
```python
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run
-from zenml import step, pipeline
+from zenml import pipeline, step
+from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from zenml.client import Client

-@step(experiment_tracker="neptune_experiment_tracker")
+@step(experiment_tracker=Client().active_stack.experiment_tracker.name)
def train_model() -> SVC:
    iris = load_iris()
    model = SVC(kernel="rbf", C=1.0)
@@ -7835,7 +7803,7 @@ def train_model() -> SVC:
    neptune_run["parameters"] = {"kernel": "rbf", "C": 1.0}
    return model

-@step(experiment_tracker="neptune_experiment_tracker")
+@step(experiment_tracker=Client().active_stack.experiment_tracker.name)
def evaluate_model(model: SVC):
    iris = load_iris()
    _, X_test, _, y_test = train_test_split(iris.data, iris.target, test_size=0.2)
@@ -7853,8 +7821,9 @@ if __name__ == "__main__":
    ml_pipeline()
```

-#### Further Reading
-Refer to [Neptune's documentation](https://docs.neptune.ai/integrations/zenml/) for more details on using this integration.
+### Further Reading
+
+For more details, refer to [Neptune's documentation](https://docs.neptune.ai/integrations/zenml/).

==================================================

=== File: docs/book/component-guide/experiment-trackers/custom.md ===

### Develop a Custom Experiment Tracker

-To create a custom experiment tracker in ZenML, follow these steps:
+#### Overview
+To create a custom experiment tracker in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). Note that the base abstraction for Experiment Trackers is under development, and extensions are currently not recommended.

-1.
**Create a Class**: Inherit from `BaseExperimentTracker` and implement the required abstract methods. -2. **Configuration Class**: If needed, inherit from `BaseExperimentTrackerConfig` to define configuration parameters. +#### Steps to Build a Custom Experiment Tracker +1. **Create a Tracker Class**: Inherit from `BaseExperimentTracker` and implement the abstract methods. +2. **Configuration Class**: If needed, create a class inheriting from `BaseExperimentTrackerConfig` to define configuration parameters. 3. **Combine Implementation and Configuration**: Inherit from `BaseExperimentTrackerFlavor`. -After implementing your custom tracker, register it using the CLI with dot notation: +#### Registering the Tracker +Use the CLI to register your custom flavor with the following command, ensuring to use dot notation for the flavor class: ```shell zenml experiment-tracker flavor register ``` -For example, if your flavor class is in `flavors/my_flavor.py`, register it as follows: +For example, if your flavor class is in `flavors/my_flavor.py`: ```shell zenml experiment-tracker flavor register flavors.my_flavor.MyExperimentTrackerFlavor ``` -**Important Notes**: -- Ensure ZenML is initialized at the root of your repository to avoid resolution issues. -- List available flavors with: +#### Best Practices +- Initialize ZenML at the root of your repository using `zenml init` to avoid resolution issues. +- After registration, verify your flavor is available with: ```shell zenml experiment-tracker flavor list ``` -### Key Components in ZenML Workflow -- **CustomExperimentTrackerFlavor**: Used during flavor creation via CLI. -- **CustomExperimentTrackerConfig**: Validates user-provided values when registering/updating a stack component. -- **CustomExperimentTracker**: Activated when the component is in use, allowing separation of configuration from implementation. +#### Important Notes +- The **CustomExperimentTrackerFlavor** class is used during flavor creation. +- The **CustomExperimentTrackerConfig** class validates user input during stack component registration. +- The **CustomExperimentTracker** is utilized when the component is in use, allowing separation of configuration from implementation. -This design allows for registering flavors and components even if their dependencies are not installed locally, provided the flavor and config classes are implemented separately. +This design enables registration of flavors and components without requiring all dependencies to be installed locally. ================================================== @@ -7901,72 +7873,55 @@ This design allows for registering flavors and components even if their dependen ### MLflow Experiment Tracker Overview -The MLflow Experiment Tracker, integrated with ZenML, utilizes the MLflow tracking service to log and visualize pipeline information such as models, parameters, and metrics. +The MLflow Experiment Tracker, integrated with ZenML, utilizes the MLflow tracking service to log and visualize pipeline step information (models, parameters, metrics). #### Use Cases -- **Continuity**: For users already utilizing MLflow for experiment tracking while transitioning to MLOps with ZenML. -- **Visualization**: To enhance the visual navigation of results from ZenML pipeline runs. -- **Shared Services**: For teams with an existing MLflow Tracking service to share logged artifacts and metrics. +- Continue using MLflow for tracking as you adopt MLOps practices with ZenML. 
+- Gain a visually interactive way to navigate results from ZenML pipeline runs. +- Connect to an existing shared MLflow Tracking service for artifact and metric sharing. -#### Configuration -To set up the MLflow Experiment Tracker, install the integration: +#### Configuration Steps +1. **Install MLflow Integration**: + ```shell + zenml integration install mlflow -y + ``` -```shell -zenml integration install mlflow -y -``` +2. **Deployment Scenarios**: + - **Localhost**: Requires a local Artifact Store, suitable for local runs only. + ```shell + zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow + zenml stack register custom_stack -e mlflow_experiment_tracker ... --set + ``` + - **Remote Tracking Server**: Requires authentication parameters. + - **Databricks**: Requires authentication parameters specific to Databricks. -**Deployment Scenarios**: -1. **Localhost**: Requires a local Artifact Store. Not suitable for collaborative environments. +#### Authentication Methods +- **Basic Authentication** (not recommended for production): ```shell - zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow - zenml stack register custom_stack -e mlflow_experiment_tracker ... --set + zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow --tracking_uri= --tracking_token= + ``` +- **ZenML Secret (Recommended)**: Store credentials securely. + ```shell + zenml secret create mlflow_secret --username= --password= + zenml experiment-tracker register mlflow --flavor=mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}} ... ``` - -2. **Remote Tracking**: Requires a deployed MLflow Tracking Server with authentication parameters. - - **Authentication Methods**: - - `tracking_uri`: URL of the MLflow server. - - `tracking_username` and `tracking_password` or `tracking_token` for authentication. - - Optional: `tracking_insecure_tls` to skip SSL verification. - - For Databricks, set `tracking_uri` to `"databricks"` and specify `databricks_host`. - -**Basic Authentication Example**: -```shell -zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \ - --tracking_uri= --tracking_token= -``` - -**ZenML Secret (Recommended)**: -Create a secret for secure credential storage: -```shell -zenml secret create mlflow_secret --username= --password= -``` -Use the secret in the tracker configuration: -```shell -zenml experiment-tracker register mlflow \ - --flavor=mlflow \ - --tracking_username={{mlflow_secret.username}} \ - --tracking_password={{mlflow_secret.password}} \ - ... -``` #### Usage -To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator and utilize MLflow's logging capabilities: +To log information from a ZenML pipeline step: ```python import mlflow @step(experiment_tracker="") -def tf_trainer(x_train, y_train): +def tf_trainer(x_train: np.ndarray, y_train: np.ndarray) -> tf.keras.Model: mlflow.tensorflow.autolog() mlflow.log_param(...) mlflow.log_metric(...) mlflow.log_artifact(...) return model ``` - -**Dynamic Tracker Usage**: +You can dynamically reference the active stack's experiment tracker: ```python from zenml.client import Client - experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) @@ -7975,32 +7930,29 @@ def tf_trainer(...): ``` #### MLflow UI -Access the MLflow UI for detailed experiment tracking: +Access the MLflow UI to view tracked experiments. 
The URL can be retrieved from the step metadata: ```python -from zenml.client import Client - -last_run = client.get_pipeline("").last_run -tracking_url = last_run.get_step("").run_metadata["experiment_tracker_url"].value +tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` -For local MLflow, start the UI with: +For local MLflow, start the UI: ```bash mlflow ui --backend-store-uri ``` #### Additional Configuration -For advanced settings, use `MLFlowExperimentTrackerSettings`: +You can pass `MLFlowExperimentTrackerSettings` for nested runs or additional tags: ```python from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"}) @step(experiment_tracker="", settings={"experiment_tracker": mlflow_settings}) -def step_one(data): +def step_one(data: np.ndarray) -> np.ndarray: ... ``` -For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor.MLFlowExperimentTrackerSettings). +For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.experiment_trackers.mlflow_experiment_tracker). ================================================== @@ -8008,47 +7960,45 @@ For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/late ### Comet Experiment Tracker Overview -The Comet Experiment Tracker, integrated with ZenML, utilizes the Comet platform to log and visualize pipeline step information (models, parameters, metrics). - -#### Use Cases -- Continue tracking experiments with Comet while adopting MLOps workflows in ZenML. -- Gain interactive visualizations of ZenML pipeline results. -- Share logged artifacts and metrics with teams or stakeholders. - -#### Deployment -To deploy the Comet Experiment Tracker, install the integration: +The Comet Experiment Tracker integrates with ZenML to log and visualize experiment data from machine learning pipelines using the Comet platform. It is beneficial for tracking results during ML experimentation and can also be used in production workflows. -```bash -zenml integration install comet -y -``` +### When to Use Comet Experiment Tracker +- If you are already using Comet for experiment tracking and want to continue as you adopt MLOps practices with ZenML. +- If you prefer a visually interactive way to navigate results from ZenML pipelines. +- If you want to share logged artifacts and metrics with your team or stakeholders. -**Authentication Methods:** -1. **ZenML Secret (Recommended)**: Store credentials securely. +### Deployment Steps +1. **Install Comet Integration**: ```bash - zenml secret create comet_secret \ - --workspace= \ - --project_name= \ - --api_key= + zenml integration install comet -y ``` - Configure the tracker: +2. **Configure Authentication**: + - **ZenML Secret (Recommended)**: Store credentials securely. + ```bash + zenml secret create comet_secret \ + --workspace= \ + --project_name= \ + --api_key= + ``` + - **Basic Authentication**: Directly configure credentials (not recommended for production). + ```bash + zenml experiment-tracker register comet_experiment_tracker --flavor=comet \ + --workspace= --project_name= --api_key= + ``` + +3. 
**Register Experiment Tracker**: ```bash zenml experiment-tracker register comet_tracker \ --flavor=comet \ --workspace={{comet_secret.workspace}} \ --project_name={{comet_secret.project_name}} \ --api_key={{comet_secret.api_key}} + zenml stack register custom_stack -e comet_experiment_tracker ... --set ``` -2. **Basic Authentication**: Directly set credentials (not recommended for production). - ```bash - zenml experiment-tracker register comet_experiment_tracker --flavor=comet \ - --workspace= --project_name= --api_key= - ``` - -#### Usage -To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator: - +### Usage +To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator and use Comet's logging capabilities: ```python from zenml.client import Client @@ -8060,17 +8010,14 @@ def my_step(): experiment_tracker.experiment.log_model(...) ``` -#### Comet UI -Each ZenML step using Comet creates a separate experiment, viewable in the Comet UI. Access the experiment URL via the step's metadata: - +### Comet UI +Each ZenML step using Comet creates a separate experiment viewable in the Comet UI. You can find the experiment URL in the step's metadata: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` -#### Full Code Example -Here's a simplified example combining the key components: - +### Full Code Example ```python from comet_ml.integration.sklearn import log_model import numpy as np @@ -8086,7 +8033,8 @@ experiment_tracker = Client().active_stack.experiment_tracker @step def load_data(): - return load_iris().data, load_iris().target + iris = load_iris() + return iris.data, iris.target @step def preprocess_data(X, y): @@ -8115,9 +8063,8 @@ if __name__ == "__main__": iris_classification_pipeline() ``` -#### Additional Configuration -You can add tags and other settings using `CometExperimentTrackerSettings`: - +### Additional Configuration +You can provide additional tags for your experiments using `CometExperimentTrackerSettings`: ```python comet_settings = CometExperimentTrackerSettings(tags=["some_tag"]) @step(experiment_tracker="", settings={"experiment_tracker": comet_settings}) @@ -8125,7 +8072,7 @@ def my_step(): ... ``` -For detailed attributes and configurations, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-comet/#zenml.integrations.comet.flavors.comet_experiment_tracker_flavor.CometExperimentTrackerSettings). +For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-comet/#zenml.integrations.comet.flavors.comet_experiment_tracker_flavor.CometExperimentTrackerSettings). ================================================== @@ -8133,43 +8080,49 @@ For detailed attributes and configurations, refer to the [SDK docs](https://sdkd # Model Registries -Model registries are centralized storage solutions for managing and tracking machine learning models throughout their development and deployment stages. They facilitate version control and reproducibility by storing metadata such as version, configuration, and metrics. In ZenML, model registries are Stack Components that streamline the retrieval, loading, and deployment of trained models, while also providing information on the training pipeline. +Model registries are centralized solutions for managing and tracking machine learning models throughout their development and deployment stages. 
They enable version control, configuration tracking, and reproducibility by storing metadata such as version, configuration, and performance metrics. + +In ZenML, model registries are Stack Components that facilitate the retrieval, loading, and deployment of trained models, along with information on the training pipeline. ### Key Concepts -- **RegisteredModel**: A logical grouping of models for tracking different versions. It includes the model's name, description, and tags. +- **RegisteredModel**: A logical grouping of models to track different versions. It includes the model's name, description, and tags. -- **RegistryModelVersion**: A specific version of a model identified by a unique version number. It contains metadata, including the model artifact reference, pipeline name, pipeline run ID, and step name. +- **RegistryModelVersion**: A specific version of a model, identified by a unique version number. It contains metadata about the model, including its name, description, tags, metrics, and references to the pipeline name, run ID, and step name. -- **ModelVersionStage**: Represents the lifecycle state of a model version, which can be `None`, `Staging`, `Production`, or `Archived`. +- **ModelVersionStage**: Represents the state of a model version, which can be `None`, `Staging`, `Production`, or `Archived`. This tracks the lifecycle of the model. -### When to Use Model Registries +### When to Use -ZenML's Artifact Store manages pipeline artifacts programmatically, but model registries provide a visual interface for managing model metadata, especially with remote orchestrators. They are ideal for centralized model management, retrieval, loading, and deployment. +ZenML's Artifact Store manages pipeline artifacts programmatically, but model registries provide a visual interface for managing model metadata, especially useful with remote orchestrators. They simplify the retrieval, loading, and deployment of models, making them ideal for centralized model state management. -### Integration in ZenML Stack +### Model Registry Integration -Model registries are optional components that work alongside experiment trackers. They can be integrated with various flavors: +Model registries are optional components in the ZenML stack and require an experiment tracker. If not using an experiment tracker, models can still be stored, but retrieval must be manual. -| Model Registry | Flavor | Integration | Notes | -|------------------------|--------------|-------------|-----------------------------------------| -| [MLflow](mlflow.md) | `mlflow` | `mlflow` | Add MLflow as Model Registry to your stack | -| [Custom Implementation](custom.md) | _custom_ | | Custom model registry implementation | +#### Available Flavors -To view available flavors, use: +| Model Registry | Flavor | Integration | Notes | +|----------------|--------|-------------|-------| +| [MLflow](mlflow.md) | `mlflow` | `mlflow` | Integrate MLflow as Model Registry | +| [Custom Implementation](custom.md) | _custom_ | | Custom options available | + +To list available flavors, use: ```shell zenml model-registry flavor list ``` ### Usage -To utilize model registries in ZenML, you must first register a model registry in your stack that matches your experiment tracker flavor. Models can be registered using: - -1. A built-in pipeline step. -2. ZenML CLI. -3. Model registry UI. +To use model registries: +1. Register a model registry in your stack, matching the flavor of your experiment tracker. +2. 
Register trained models via:
+   - Built-in pipeline step
+   - ZenML CLI
+   - Model registry UI
+3. Retrieve and load models for deployment or experimentation.

-Once registered, models can be retrieved and loaded for deployment or further experimentation. For more details on fetching models, refer to the documentation on [fetching runs](../../how-to/pipeline-development/build-pipelines/fetching-pipelines.md).
+For further details, refer to the documentation on [fetching runs](../../how-to/pipeline-development/build-pipelines/fetching-pipelines.md).

==================================================

=== File: docs/book/component-guide/model-registries/custom.md ===

### Develop a Custom Model Registry

#### Overview
This documentation provides guidance on developing a custom model registry in ZenML. Familiarity with ZenML's component flavor concepts is recommended before proceeding.

#### Important Notes
-- The `ModelRegistry` component is new and may undergo API changes.
-- Feedback on the base abstraction is encouraged via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues/new/choose).
+- The model registry stack component is new and may undergo API changes. Feedback on the base abstraction is encouraged via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues/new/choose).

#### Base Abstraction
-The `BaseModelRegistry` is the abstract class for creating custom model registries. It provides a basic interface for model registration and versioning:
+The `BaseModelRegistry` is the abstract class for creating a custom model registry. It provides a basic interface for model registration and retrieval.
+
+**Key Components:**

```python
from abc import ABC, abstractmethod
-from typing import Any, Dict, List, Optional, cast
-from zenml.stack import StackComponent, StackComponentConfig
+from typing import Any, Dict, List, Optional
+
+from zenml.stack import StackComponent, StackComponentConfig

class BaseModelRegistryConfig(StackComponentConfig):
    """Base config for model registries."""

class BaseModelRegistry(StackComponent, ABC):
-    @property
-    def config(self) -> BaseModelRegistryConfig:
-        return cast(BaseModelRegistryConfig, self._config)
+    """Base class for ZenML model registries."""

    @abstractmethod
    def register_model(self, name: str, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> RegisteredModel:
@@ -8206,20 +8157,21 @@ class BaseModelRegistry(StackComponent, ABC):
    @abstractmethod
    def delete_model(self, name: str) -> None:
-        """Deletes a model."""
+        """Deletes a registered model."""

    @abstractmethod
    def update_model(self, name: str, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> RegisteredModel:
-        """Updates a model."""
+        """Updates a registered model."""

    @abstractmethod
    def get_model(self, name: str) -> RegisteredModel:
-        """Gets a model."""
+        """Gets a registered model."""

    @abstractmethod
    def list_models(self, name: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> List[RegisteredModel]:
-        """Lists models."""
-
+        """Lists registered models."""
+
+    # Model Version Methods
    @abstractmethod
    def register_model_version(self, name: str, version: Optional[str] = None, **kwargs: Any) -> RegistryModelVersion:
        """Registers a model version."""
@@ -8250,61 +8202,59 @@ class BaseModelRegistry(StackComponent, ABC):
```

#### Creating a Custom Model Registry
-To create a custom model registry:
-1. Understand core concepts [here](./model-registries.md#model-registry-concepts-and-terminology).
-2.
Inherit from `BaseModelRegistry` and implement the abstract methods. -3. Create a `ModelRegistryConfig` class inheriting from `BaseModelRegistryConfig`. -4. Combine the implementation and configuration by inheriting from `BaseModelRegistryFlavor` and provide a name. - -Register your custom model registry using the CLI: +To create a custom model registry flavor: +1. Understand core concepts of model registries. +2. Inherit from `BaseModelRegistry` and implement abstract methods. +3. Create a `ModelRegistryConfig` class extending `BaseModelRegistryConfig`. +4. Combine implementation and configuration by inheriting from `BaseModelRegistryFlavor`. +**Registering the Flavor:** ```shell zenml model-registry flavor register ``` #### Workflow Integration -- **CustomModelRegistryFlavor** is used during flavor creation via CLI. +- **CustomModelRegistryFlavor** is used during flavor creation. - **CustomModelRegistryConfig** is used for validation during stack component registration. -- **CustomModelRegistry** is utilized when the component is in use, allowing separation of configuration from implementation. +- **CustomModelRegistry** is utilized when the component is in use. +#### Example For a complete implementation example, refer to the [MLFlowModelRegistry](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). ================================================== === File: docs/book/component-guide/model-registries/mlflow.md === -# MLflow Model Registry Overview +### MLflow Model Registry Overview -**MLflow** is a tool for tracking experiments, managing models, and deploying them. It includes a **Model Registry** for managing and tracking ML models and artifacts, providing a user interface for browsing. +MLflow is a tool for tracking experiments, managing models, and deploying them across environments. ZenML integrates with MLflow, providing an Experiment Tracker and Model Deployer. The MLflow model registry allows for managing and tracking ML models and their artifacts, offering a user interface for browsing. -## Use Cases -The MLflow model registry is useful for: -- Tracking different model versions during development and deployment. -- Managing model deployments across various environments. -- Monitoring and comparing model performance over time. -- Simplifying model deployment to production or staging environments. +#### Use Cases +- Track different model versions during development and deployment. +- Manage deployments across various environments. +- Monitor and compare model performance over time. +- Simplify model deployment processes. -## Deployment Steps -To deploy the MLflow model registry, install the MLflow integration: +#### Installation +To use the MLflow model registry, install the MLflow integration: ```shell zenml integration install mlflow -y ``` -Register the model registry component in your stack: +Register the MLflow model registry component in your stack: ```shell zenml model-registry register mlflow_model_registry --flavor=mlflow zenml stack register custom_stack -r mlflow_model_registry ... --set ``` -**Note:** The model registry uses the same configuration as the MLflow Experiment Tracker. Use MLflow version **2.2.1** or higher due to a critical vulnerability. +**Note:** The MLflow model registry uses the same configuration as the MLflow Experiment Tracker. Use MLflow version 2.2.1 or higher due to a critical vulnerability in older versions. 
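After registration, you can verify the component through the registry's Python interface; `list_models` is part of the base abstraction shown earlier. A minimal sketch, assuming at least one model has already been registered:

```python
from zenml.client import Client

# Fetch the model registry from the active stack and list its registered models.
model_registry = Client().active_stack.model_registry
for registered_model in model_registry.list_models():
    print(registered_model.name)
```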
-## Usage -You can use the MLflow model registry in ZenML pipelines or via the CLI. +#### Usage +You can register models in ZenML pipelines or manually via the CLI. -### Register Models in a Pipeline -Use the `mlflow_register_model_step` to register a model: +**Registering Models in a Pipeline:** ```python from zenml import pipeline @@ -8313,22 +8263,18 @@ from zenml.integrations.mlflow.steps.mlflow_registry import mlflow_register_mode @pipeline def mlflow_registry_training_pipeline(): model = ... - mlflow_register_model_step( - model=model, - name="tensorflow-mnist-model", - ) + mlflow_register_model_step(model=model, name="tensorflow-mnist-model") ``` **Parameters for `mlflow_register_model_step`:** - `name`: Required model name. - `version`: Model version. -- `trained_model_name`: Artifact name in MLflow. +- `trained_model_name`: Name of the model artifact in MLflow. - `model_source_uri`: Path to the model. - `description`: Model version description. -- `metadata`: List of metadata for the model version. +- `metadata`: Metadata for the model version. -### Register Models via CLI -To manually register a model version: +**Registering Models via CLI:** ```shell zenml model-registry models register-version Tensorflow-model \ @@ -8340,34 +8286,33 @@ zenml model-registry models register-version Tensorflow-model \ --zenml-step-name="trainer" ``` -### Interact with Registered Models -List all registered models: +#### Interacting with Registered Models +- List all registered models: ```shell zenml model-registry models list ``` -List versions of a specific model: +- List versions of a specific model: ```shell zenml model-registry models list-versions tensorflow-mnist-model ``` -Get details of a specific model version: +- Get details of a specific model version: ```shell zenml model-registry models get-version tensorflow-mnist-model -v 1 ``` -### Deleting Models -To delete a registered model or a specific version: +- Delete a registered model or specific version: ```shell zenml model-registry models delete REGISTERED_MODEL_NAME zenml model-registry models delete-version REGISTERED_MODEL_NAME -v VERSION ``` -For more details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). +For further details, refer to the [MLflow model deployer documentation](../model-deployers/mlflow.md#deploy-from-model-registry) and the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). ================================================== @@ -8375,14 +8320,14 @@ For more details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.i ### Local Docker Orchestrator -The Local Docker Orchestrator is a built-in feature of ZenML that allows you to run pipelines locally within Docker containers. +The Local Docker Orchestrator is a built-in ZenML orchestrator that runs pipelines locally using Docker. #### When to Use - For running pipeline steps in isolated local environments. -- For debugging pipeline issues without relying on remote infrastructure. +- For debugging pipeline issues without incurring costs for remote infrastructure. #### Deployment -Ensure Docker is installed and running. To register and activate the local Docker orchestrator in your active stack, use: +Ensure Docker is installed and running. 
Register the orchestrator and activate a stack with the following commands: ```shell zenml orchestrator register --flavor=local_docker @@ -8390,14 +8335,14 @@ zenml stack register -o ... --set ``` #### Running a Pipeline -Execute any ZenML pipeline with the orchestrator: +Execute any ZenML pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` #### Additional Configuration -You can customize the Local Docker orchestrator using `LocalDockerOrchestratorSettings`. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local_docker.local_docker_orchestrator.LocalDockerOrchestratorSettings) for available attributes. +You can customize the Local Docker orchestrator using `LocalDockerOrchestratorSettings`. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local_docker.local_docker_orchestrator.LocalDockerOrchestratorSettings) for available attributes and [this page](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) for specifying settings. Example of specifying CPU count (Windows only): @@ -8410,7 +8355,9 @@ def return_one() -> int: return 1 settings = { - "orchestrator": LocalDockerOrchestratorSettings(run_args={"cpu_count": 3}) + "orchestrator": LocalDockerOrchestratorSettings( + run_args={"cpu_count": 3} + ) } @pipeline(settings=settings) @@ -8419,38 +8366,41 @@ def simple_pipeline(): ``` #### Enabling CUDA for GPU -To run steps on GPU, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full acceleration. +For GPU support, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA and customize settings for GPU acceleration. ================================================== === File: docs/book/component-guide/orchestrators/lightning.md === -### Lightning AI Orchestrator Overview +# Lightning AI Orchestrator Overview -The **Lightning AI Orchestrator**, integrated with ZenML, enables the execution of machine learning pipelines on Lightning AI's scalable infrastructure. It is designed for remote ZenML deployments and should not be used in local scenarios. +## Description +The Lightning AI Orchestrator, integrated with ZenML, enables running pipelines on Lightning AI's scalable infrastructure. It is designed for remote ZenML deployments and should not be used locally. -#### When to Use -- For quick GPU instance execution of pipelines. -- If already utilizing Lightning AI for machine learning. -- To leverage managed infrastructure for deployment and scaling. -- To benefit from Lightning AI's optimizations for ML workloads. +## When to Use +- Fast deployment on GPU instances. +- Existing use of Lightning AI for ML projects. +- Need for managed infrastructure for ML workflows. +- Simplified deployment and scaling of ML applications. +- Access to Lightning AI's optimizations. -#### Deployment Requirements +## Deployment Requirements - A Lightning AI account with credentials. -- No additional infrastructure deployment is necessary. +- No additional infrastructure deployment needed. -#### Functionality -- Archives the ZenML repository and uploads it to Lightning AI when running a pipeline. -- Uses `lightning-sdk` to create a studio and run commands via `studio.run()`. -- Supports asynchronous execution, allowing background runs with status checks in the ZenML Dashboard or Lightning AI Studio. 
-- Custom commands can be specified for environment setup. +## Functionality +- Archives the ZenML repository and uploads it to Lightning AI Studio. +- Uses `lightning-sdk` to create a new studio and run commands for environment setup. +- Supports async mode for background execution and status checking via ZenML Dashboard. +- Allows custom commands for environment setup before pipeline execution. +- Supports both CPU and GPU machine types. -#### Setup Instructions +## Setup Instructions 1. Install the ZenML Lightning integration: ```shell zenml integration install lightning ``` -2. Ensure a remote artifact store is part of your stack. +2. Configure a remote artifact store. 3. Obtain Lightning AI credentials: - `LIGHTNING_USER_ID` - `LIGHTNING_API_KEY` @@ -8466,12 +8416,13 @@ The **Lightning AI Orchestrator**, integrated with ZenML, enables the execution --teamspace= \ # optional --organization= # optional ``` -5. Register and activate a stack: + +5. Activate the stack: ```bash zenml stack register lightning_stack -o lightning_orchestrator ... --set ``` -#### Pipeline Configuration +## Pipeline Configuration Configure the orchestrator at the pipeline level: ```python from zenml.integrations.lightning.flavors.lightning_orchestrator_flavor import LightningOrchestratorSettings @@ -8488,14 +8439,14 @@ def my_pipeline(): ... ``` -#### Running the Pipeline -Execute the pipeline using: +## Running Pipelines +Execute a ZenML pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` -#### Monitoring and Management -Monitor running applications through the Lightning AI UI. To retrieve the UI URL for a specific run: +## Monitoring +Monitor applications via the Lightning AI UI. Retrieve the UI URL for a specific pipeline run: ```python from zenml.client import Client @@ -8503,68 +8454,72 @@ pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` -#### Additional Configuration -You can further customize the orchestrator settings at the pipeline or step level. For GPU usage, specify a GPU-enabled machine type: +## Additional Configuration +Customize execution settings: ```python lightning_settings = LightningOrchestratorSettings( - machine_type="gpu" # or specific types like `A10G` + main_studio_name="my_studio", + machine_type="gpu", # Specify GPU type if needed ) ``` -Refer to Lightning AI's documentation for available GPU types. -### Important Notes -- Ensure `zenml init` is run in the repository root before executing pipelines. -- The `custom_commands` attribute allows pre-execution commands for environment setup. +Settings can be applied at both pipeline and step levels. + +## GPU Usage +For GPU execution, specify a GPU-enabled machine type: +```python +lightning_settings = LightningOrchestratorSettings( + machine_type="A10G" # Example GPU type +) +``` +Refer to [Lightning AI documentation](https://lightning.ai/docs/overview/studios/change-gpus) for available GPU types. ================================================== === File: docs/book/component-guide/orchestrators/hyperai.md === -# HyperAI Orchestrator +# HyperAI Orchestrator Summary -The HyperAI orchestrator is designed for deploying pipelines on HyperAI instances, a cloud compute platform for AI. It is intended for use within a remote ZenML deployment and is not suitable for local deployments. +The **HyperAI Orchestrator** is designed for deploying pipelines on HyperAI instances within a remote ZenML deployment scenario. 
It is not suitable for local ZenML deployments.

-## When to Use
+### When to Use
- For managed pipeline execution.
- If you are a HyperAI customer.

-## Prerequisites
-1. A running HyperAI instance accessible via the internet with SSH key-based access.
-2. A recent version of Docker, including Docker Compose.
-3. The appropriate [NVIDIA Driver](https://www.nvidia.com/en-us/drivers/unix/) installed on the HyperAI instance.
-4. The [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) installed and configured (optional for GPU usage).
-
-## How It Works
-The orchestrator utilizes Docker Compose to create and execute a Docker Compose file for each ZenML pipeline step, ensuring that steps run only if their upstream steps succeed. It can connect to a container registry for Docker image transfers.
+### Prerequisites
+- A running, internet-accessible HyperAI instance with SSH key-based access.
+- A recent version of Docker with Docker Compose.
+- NVIDIA Driver and NVIDIA Container Toolkit installed (only needed for GPU use).

-## Scheduled Pipelines
-The orchestrator supports:
-- **Cron expressions** via `cron_expression` for recurring runs (requires `crontab`).
-- **Scheduled runs** via `run_once_start_time` for one-time runs (requires `at`).
+### Functionality
+- Utilizes Docker Compose to create and execute pipelines; a step runs only after its upstream steps succeed.
+- Each ZenML pipeline step is defined as a service in a generated Docker Compose file.
+- Supports scheduled pipelines using:
+  - **Cron expressions** via `cron_expression` for recurring runs (requires `crontab`).
+  - **Run once** scheduling via `run_once_start_time` for one-time runs (requires `at`).

-## Deployment
-To deploy the HyperAI orchestrator:
-1. Configure a HyperAI Service Connector in ZenML with credentials for the HyperAI instance.
-2. Ensure the orchestrator is part of a stack that includes a container registry and image builder.
+### Deployment Steps
+1. **Configure a HyperAI Service Connector** in ZenML:
+   ```shell
+   zenml service-connector register --type=hyperai --auth-method=rsa-key --base64_ssh_key= --hostnames=, --username=
+   ```
+   - Hostnames can be DNS names or IP addresses.

-### Example: Registering a Service Connector
-```shell
-zenml service-connector register --type=hyperai --auth-method=rsa-key --base64_ssh_key= --hostnames=, --username=
-```
+2. **Register the Orchestrator**:
+   ```shell
+   zenml orchestrator register --flavor=hyperai
+   zenml stack register -o ... --set
+   ```

-### Example: Registering the Orchestrator
-```shell
-zenml orchestrator register --flavor=hyperai
-zenml stack register -o ... --set
-```
+3. **Run a ZenML Pipeline**:
+   ```shell
+   python file_that_runs_a_zenml_pipeline.py
+   ```

-### Running a ZenML Pipeline
-```shell
-python file_that_runs_a_zenml_pipeline.py
-```
+### GPU Configuration
+To utilize GPU acceleration, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA.

-## Enabling CUDA for GPU
-To utilize GPU acceleration, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) for CUDA configuration.

==================================================

@@ -8572,26 +8527,26 @@ To utilize GPU acceleration, follow [these instructions](../../how-to/pipeline-d

### Airflow Orchestrator for ZenML Pipelines

-ZenML pipelines can be executed as Airflow DAGs, leveraging Airflow's orchestration capabilities alongside ZenML's ML-specific features.
Each ZenML step runs in a separate Docker container managed by Airflow. +ZenML pipelines can be executed as Airflow DAGs, leveraging Airflow's orchestration capabilities alongside ZenML's ML-specific features. Each step in a ZenML pipeline runs in a separate Docker container managed by Airflow. -#### When to Use Airflow Orchestrator +#### When to Use Airflow - Proven production-grade orchestrator. -- Existing use of Airflow. -- Local pipeline execution. -- Willingness to deploy and maintain Airflow. +- Already using Airflow. +- Need to run pipelines locally. +- Willing to deploy and maintain Airflow. #### Deployment Options -- **Local Deployment**: No additional setup required. -- **Remote Deployment**: Requires a remote ZenML deployment. - - Use ZenML GCP Terraform module for Google Cloud Composer. - - Managed deployments: Google Cloud Composer, Amazon MWAA, Astronomer. - - Manual deployment: Refer to [Airflow docs](https://airflow.apache.org/docs/apache-airflow/stable/production-deployment.html). +- **Local:** No additional setup required. +- **Remote:** Options include: + - ZenML GCP Terraform module with Google Cloud Composer. + - Managed services like Google Cloud Composer, Amazon MWAA, or Astronomer. + - Manual deployment (refer to [Airflow docs](https://airflow.apache.org/docs/apache-airflow/stable/production-deployment.html)). -**Dependencies for Remote Deployment**: -- Install `pydantic~=2.7.1`. -- Install either `apache-airflow-providers-docker` or `apache-airflow-providers-cncf-kubernetes`. +**Required Python Packages for Remote Deployment:** +- `pydantic~=2.7.1` +- `apache-airflow-providers-docker` or `apache-airflow-providers-cncf-kubernetes` (based on the operator used). -#### Usage Steps +#### Setup Instructions 1. Install ZenML Airflow integration: ```shell zenml integration install airflow @@ -8603,43 +8558,42 @@ ZenML pipelines can be executed as Airflow DAGs, leveraging Airflow's orchestrat zenml stack register -o ... --set ``` -#### Local Setup +**Local Airflow Server Setup:** - Create a virtual environment: ```bash python -m venv airflow_server_environment source airflow_server_environment/bin/activate pip install "apache-airflow==2.4.0" "apache-airflow-providers-docker<3.8.0" "pydantic~=2.7.1" ``` -- Set environment variables: - - `AIRFLOW_HOME`: Default is `~/airflow`. - - `AIRFLOW__CORE__DAGS_FOLDER`: Default is `/dags`. - - `AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL`: Default is 30 seconds. - -**MacOS Users**: Set `no_proxy` to prevent crashes: -```bash -export no_proxy=* -``` - +- Set environment variables (optional): + ```bash + export AIRFLOW_HOME=~/airflow + export AIRFLOW__CORE__DAGS_FOLDER=/dags + export AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL=30 + ``` - Start the local Airflow server: ```bash airflow standalone ``` -- Run the ZenML pipeline: - ```shell - python file_that_runs_a_zenml_pipeline.py - ``` -Copy the generated `.zip` file to the Airflow DAGs directory or configure ZenML to do this automatically: -```bash +#### Running a Pipeline +Run the pipeline script: +```shell +python file_that_runs_a_zenml_pipeline.py +``` +This generates a `.zip` file representing the ZenML pipeline for Airflow. Copy the `.zip` file to the Airflow DAGs directory. + +To automate copying: +```shell zenml orchestrator update --dag_output_dir= ``` #### Remote Deployment Considerations -- Requires remote ZenML server, Airflow server, remote artifact store, and remote container registry. 
-- Running `pipeline.run()` creates a `.zip` file for Airflow but does not execute it directly. +- Requires a remote ZenML server, deployed Airflow server, remote artifact store, and remote container registry. +- Running `pipeline.run()` creates a `.zip` file but does not execute the pipeline directly. #### Scheduling Pipelines -Set schedules in the past: +Schedule pipeline runs in Airflow: ```python from datetime import datetime, timedelta from zenml.pipelines import Schedule @@ -8656,15 +8610,15 @@ scheduled_pipeline() ``` #### Airflow UI -Access the UI at [http://localhost:8080](http://localhost:8080). Default username is `admin`. Password can be found in `/standalone_admin_password.txt`. +Access the Airflow UI at [http://localhost:8080](http://localhost:8080) for monitoring pipeline runs. Default credentials: username `admin`, password found in `/standalone_admin_password.txt`. #### Additional Configuration -Use `AirflowOrchestratorSettings` for further customization. For GPU support, follow specific instructions to enable CUDA. +Customize the Airflow orchestrator with `AirflowOrchestratorSettings` during pipeline definition. For GPU support, follow specific instructions to enable CUDA. -#### Airflow Operators +#### Using Different Airflow Operators ZenML supports: - `DockerOperator`: Runs Docker images on the same machine. -- `KubernetesPodOperator`: Runs Docker images in a Kubernetes pod. +- `KubernetesPodOperator`: Runs Docker images in a Kubernetes cluster. Specify the operator: ```python @@ -8676,9 +8630,8 @@ airflow_settings = AirflowOrchestratorSettings( ) ``` -**Custom Operators**: Specify any operator by its import path. - -**Custom DAG Generator**: Provide a custom DAG generator file for more control over DAG creation. +#### Custom Operators and DAG Generators +You can specify custom operators and provide a custom DAG generator file for more control over DAG creation. Ensure it contains required classes and constants. For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-airflow/#zenml.integrations.airflow.orchestrators.airflow_orchestrator.AirflowOrchestrator). @@ -8686,131 +8639,110 @@ For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integr === File: docs/book/component-guide/orchestrators/sagemaker.md === -### AWS Sagemaker Orchestrator Overview +# AWS Sagemaker Orchestrator Summary -**Description**: The AWS Sagemaker Orchestrator integrates with ZenML to facilitate serverless ML workflows on AWS, enabling efficient, production-ready pipeline management. - -**Usage Context**: This component is intended for remote ZenML deployments only; local deployments may cause issues. +## Overview +The **Sagemaker Orchestrator** integrates with [Sagemaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) to provide a serverless ML workflow tool on AWS, allowing for production-ready, repeatable cloud orchestration with minimal setup. It is designed for remote ZenML deployments. -### When to Use +## When to Use +Use the Sagemaker orchestrator if: - You are using AWS. - You need a production-grade orchestrator with a UI for tracking pipeline runs. -- You prefer a managed, serverless solution. +- You prefer a managed and serverless solution. -### Functionality -- The Sagemaker orchestrator utilizes Sagemaker Pipelines to create ML pipelines, specifically generating `PipelineStep` for each ZenML step, currently supporting only Sagemaker Processing jobs. 
+## Functionality +The Sagemaker orchestrator creates a SageMaker `PipelineStep` for each ZenML pipeline step, currently supporting only processing jobs. -### Deployment Requirements -1. Deploy ZenML to the cloud, preferably in the same region as Sagemaker. +## Deployment Requirements +1. Deploy ZenML to the cloud, ideally in the same region as Sagemaker. 2. Ensure connection to the remote ZenML server. -3. Set up necessary IAM permissions for the role/user. - -### Installation -Install required integrations: -```shell -zenml integration install aws s3 -``` - -### Authentication Methods -1. **Service Connector** (Recommended): - ```shell - zenml service-connector register --type aws -i - zenml orchestrator register --flavor=sagemaker --execution_role= - zenml orchestrator connect --connector - zenml stack register -o ... --set - ``` - -2. **Explicit Authentication**: - ```shell - zenml orchestrator register --flavor=sagemaker --execution_role= --aws_access_key_id=... --aws_secret_access_key=... --region=... - zenml stack register -o ... --set - ``` +3. Configure IAM roles with `AmazonSageMakerFullAccess` and `sagemaker.amazonaws.com` as a Principal Service. -3. **Implicit Authentication**: +## Usage Steps +1. Install required integrations: ```shell - zenml orchestrator register --flavor=sagemaker --execution_role= - python run.py # Uses default AWS profile + zenml integration install aws s3 ``` +2. Ensure Docker is installed and running. +3. Set up a remote artifact store and container registry. +4. Authenticate the orchestrator using one of three methods: + - **Service Connector**: + ```shell + zenml service-connector register --type aws -i + zenml orchestrator register --flavor=sagemaker --execution_role= + zenml orchestrator connect --connector + zenml stack register -o ... --set + ``` + - **Explicit Authentication**: + ```shell + zenml orchestrator register --flavor=sagemaker --execution_role= --aws_access_key_id=... --aws_secret_access_key=... --region=... + zenml stack register -o ... --set + ``` + - **Implicit Authentication**: + ```shell + zenml orchestrator register --flavor=sagemaker --execution_role= + python run.py # Authenticates with `default` profile + ``` -### Running Pipelines -To execute a pipeline: +## Running Pipelines +Run any ZenML pipeline using: ```shell python run.py ``` -Expected output indicates the orchestrator is running remotely. - -### Sagemaker UI Access -Access the Sagemaker UI via Sagemaker Studio to view pipeline details and logs. - -### Debugging -If a pipeline fails before the first step, check the Sagemaker UI for error messages and logs. - -### Scheduling and Configuration -- **Scheduling**: Currently not supported; contributions welcome. -- **Pipeline Configuration**: Additional settings can be specified at the pipeline or step level using `SagemakerOrchestratorSettings`. - -Example for step-level configuration: -```python -@step(settings={"orchestrator": sagemaker_orchestrator_settings}) -``` +Monitor pipeline runs through the ZenML dashboard and the Sagemaker UI. -### Warm Pools -Enable Warm Pools to reduce startup time: -```python -sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(keep_alive_period_in_seconds=300) -``` +## Debugging +If a pipeline fails before starting, check the Sagemaker UI for error messages and logs. Use Amazon CloudWatch for detailed logging. 
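As a sketch of what pulling those logs programmatically could look like (assuming `boto3` is installed and AWS credentials are configured; the region and `ERROR` filter pattern are hypothetical choices), SageMaker processing jobs write to the `/aws/sagemaker/ProcessingJobs` log group:

```python
import boto3

# Fetch recent error events from the standard SageMaker processing-job log group.
logs = boto3.client("logs", region_name="us-east-1")
response = logs.filter_log_events(
    logGroupName="/aws/sagemaker/ProcessingJobs",
    filterPattern="ERROR",
    limit=50,
)
for event in response["events"]:
    print(event["message"])
```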
-### S3 Data Access
-- **Import Data**:
-```python
-sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(input_data_s3_mode="File", input_data_s3_uri="s3://some-bucket-name/folder")
-```
-- **Export Data**:
-```python
-sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(output_data_s3_mode="EndOfJob", output_data_s3_uri="s3://some-results-bucket-name/results")
-```
+## Configuration
+- **Pipeline and Step Level Configurations**: Use `SagemakerOrchestratorSettings` for instance types and other configurations.
+- **Warm Pools**: Enable to reduce startup time:
+  ```python
+  sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(keep_alive_period_in_seconds=300)
+  ```
+- **S3 Data Access**: Configure S3 data import/export using `input_data_s3_uri` and `output_data_s3_uri`.

-### Tagging
-Add tags to pipeline executions and jobs:
+## Tagging
+Add tags to pipeline executions and jobs for better resource management:
```python
pipeline_settings = SagemakerOrchestratorSettings(pipeline_tags={"project": "my-ml-project"})
```

-### GPU Support
-Follow specific instructions to enable CUDA for GPU-backed hardware.
+## GPU Configuration
+For GPU usage, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for acceleration.

-This summary encapsulates the essential technical details and instructions for using the AWS Sagemaker Orchestrator within ZenML, ensuring clarity and conciseness while retaining critical information.

==================================================

=== File: docs/book/component-guide/orchestrators/local.md ===

-### Local Orchestrator Overview
+### Local Orchestrator in ZenML

-The Local Orchestrator is a built-in component of ZenML that allows you to run pipelines locally without additional setup.
+The local orchestrator is a built-in feature of ZenML that allows you to run pipelines locally without additional setup.

#### When to Use
- Ideal for beginners starting with ZenML.
- Useful for quickly experimenting and debugging new pipelines.

#### Deployment
-The Local Orchestrator is included with ZenML and requires no extra configuration.
+The local orchestrator is included with ZenML and requires no extra configuration.

#### Usage
-To register and use the Local Orchestrator in your active stack:
+To register and use the local orchestrator in your active stack, execute the following commands:

```shell
zenml orchestrator register --flavor=local
zenml stack register -o ... --set
```

-Run any ZenML pipeline using the Local Orchestrator with:
+Run any ZenML pipeline with:

```shell
python file_that_runs_a_zenml_pipeline.py
```

-For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local.local_orchestrator.LocalOrchestrator).
+For detailed attributes of the local orchestrator, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local.local_orchestrator.LocalOrchestrator).

==================================================

@@ -8818,69 +8750,63 @@ For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenm

### Kubernetes Orchestrator Overview

-The ZenML `kubernetes` integration allows you to orchestrate and scale ML pipelines on Kubernetes clusters without writing Kubernetes code.
It is a lightweight alternative to distributed orchestrators like Airflow or Kubeflow, executing each pipeline step in separate Kubernetes pods managed by a master pod via topological sorting. This orchestrator is faster and simpler to set up than Kubeflow, making it suitable for teams new to distributed orchestration. However, for long-term use, transitioning to Kubeflow is recommended due to its maturity. +The ZenML `kubernetes` integration allows you to orchestrate and scale ML pipelines on Kubernetes clusters without writing Kubernetes code. It is a lightweight alternative to distributed orchestrators like Airflow or Kubeflow, running each pipeline step in separate Kubernetes pods. Unlike Kubeflow, which manages orchestration, ZenML uses a master pod for topological sorting of step execution, making it faster and simpler to set up. -**Warning**: This component is intended for remote ZenML deployments only; using it locally may cause issues. - -### When to Use the Kubernetes Orchestrator -- For a lightweight solution to run pipelines on Kubernetes. -- If you prefer not to maintain Kubeflow Pipelines. -- If you want to avoid managed solutions like Vertex. +**Ideal Use Cases:** +- Lightweight pipeline execution on Kubernetes. +- Avoiding maintenance of Kubeflow Pipelines. +- Not opting for managed solutions like Vertex. ### Deployment Requirements + To deploy the Kubernetes orchestrator, you need: -- A Kubernetes cluster (check the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md) for deployment options). +- A Kubernetes cluster (remote/cloud or custom). - A remote ZenML server connected to the cluster. +- ZenML `kubernetes` integration installed: + ```shell + zenml integration install kubernetes + ``` +- Docker and kubectl installed. -### Setup Instructions -1. **Install the ZenML Kubernetes Integration**: - ```shell - zenml integration install kubernetes - ``` - -2. **Prerequisites**: - - Docker installed and running. - - `kubectl` installed. - - A remote artifact store and container registry in your stack. - - Optionally, a configured Kubernetes context (run `kubectl config get-contexts`). +### Using the Kubernetes Orchestrator -3. **Service Connector Setup** (recommended): - - If using a Service Connector, register the orchestrator without needing a local `kubectl` context: +1. **With Service Connector:** + - Register the orchestrator without needing local `kubectl`: ```shell zenml orchestrator register --flavor kubernetes zenml orchestrator connect --connector zenml stack register -o ... --set ``` -4. **Without Service Connector**: - - If no Service Connector is available, configure `kubectl` and register the orchestrator: +2. **Without Service Connector:** + - Configure local `kubectl` and register the orchestrator: ```shell zenml orchestrator register --flavor=kubernetes --kubernetes_context= zenml stack register -o ... --set ``` ### Running a Pipeline -To run a ZenML pipeline with the Kubernetes orchestrator: + +To run a ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` -You can check the logs and see the created pods with: +You can view logs and check pod status with: ```shell kubectl get pods -n zenml ``` -### Interacting with Pods -You can interact with pods using labels for easier management: -```shell -kubectl delete pod -n zenml -l pipeline= -``` +### Pod Interaction and Configuration -### Additional Configuration -The orchestrator uses the `zenml` namespace by default and creates a service account with edit permissions. 
You can customize: -- `kubernetes_namespace`: Specify a different namespace. -- `service_account_name`: Use a specific service account. +- Pods are labeled for easier management (e.g., by pipeline name). +- Default namespace is `zenml`, with a service account `zenml-service-account` created automatically. +- Additional settings can be configured, such as: + - `kubernetes_namespace`: Custom namespace. + - `service_account_name`: Existing service account for RBAC permissions. + +### Advanced Configuration -**Example Configuration**: +You can customize pod settings using `KubernetesOrchestratorSettings`: ```python from zenml.integrations.kubernetes.flavors.kubernetes_orchestrator_flavor import KubernetesOrchestratorSettings @@ -8891,18 +8817,16 @@ kubernetes_settings = KubernetesOrchestratorSettings( "requests": {"cpu": "2", "memory": "4Gi"}, "limits": {"cpu": "4", "memory": "8Gi"}, }, + ... }, kubernetes_namespace="ml-pipelines", service_account_name="zenml-pipeline-runner" ) - -@pipeline(settings={"orchestrator": kubernetes_settings}) -def my_kubernetes_pipeline(): - ... ``` ### Step-Level Configuration -You can define settings at the step level to override pipeline-level settings: + +You can define settings on a per-step basis to override pipeline-level settings: ```python @step(settings={"orchestrator": k8s_settings}) def train_model(data: dict) -> None: @@ -8910,42 +8834,43 @@ def train_model(data: dict) -> None: ``` ### GPU Configuration -To run steps on GPU, follow additional setup instructions to enable CUDA. -For a complete list of configurable attributes and further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.orchestrators.kubernetes_orchestrator.KubernetesOrchestrator). +For GPU usage, ensure to follow specific instructions for enabling CUDA and customizing settings accordingly. + +For more details on configuration and usage, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.orchestrators.kubernetes_orchestrator.KubernetesOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/orchestrators.md === -### Orchestrators in ZenML - -**Overview**: The orchestrator is a crucial component in the MLOps stack, responsible for executing machine learning pipelines. It ensures that pipeline steps execute only when all required inputs are available. +# Orchestrators in ZenML -**Docker Integration**: Many ZenML remote orchestrators build Docker images to transport and execute pipeline code. For details on Docker image customization, refer to the [Docker guide](../../how-to/customize-docker-builds/README.md). +## Overview +The orchestrator is a crucial component in the MLOps stack, responsible for executing machine learning pipelines. It ensures that pipeline steps run only when all their required inputs are available. -### When to Use an Orchestrator -The orchestrator is mandatory in the ZenML stack, storing all artifacts from pipeline runs and must be configured in every stack. +### Key Features +- **Artifact Storage**: The orchestrator stores all artifacts produced by pipeline runs. +- **Configuration Requirement**: It must be configured in all ZenML stacks. 
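Since a stack always contains exactly one orchestrator, a quick way to check which one is active is the standard stack inspection command (a sketch; the exact output varies by ZenML version):

```shell
zenml stack describe
```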
### Orchestrator Flavors -ZenML provides various orchestrators, including: - -| Orchestrator | Flavor | Integration | Notes | -|----------------------------------|-----------------|------------------|-------------------------------------| -| [LocalOrchestrator](local.md) | `local` | _built-in_ | Runs pipelines locally. | -| [LocalDockerOrchestrator](local-docker.md) | `local_docker` | _built-in_ | Runs pipelines locally using Docker.| -| [KubernetesOrchestrator](kubernetes.md) | `kubernetes` | `kubernetes` | Runs pipelines in Kubernetes. | -| [KubeflowOrchestrator](kubeflow.md) | `kubeflow` | `kubeflow` | Runs pipelines using Kubeflow. | -| [VertexOrchestrator](vertex.md) | `vertex` | `gcp` | Runs pipelines in Vertex AI. | -| [SagemakerOrchestrator](sagemaker.md) | `sagemaker` | `aws` | Runs pipelines in Sagemaker. | -| [AzureMLOrchestrator](azureml.md) | `azureml` | `azure` | Runs pipelines in AzureML. | -| [TektonOrchestrator](tekton.md) | `tekton` | `tekton` | Runs pipelines using Tekton. | -| [AirflowOrchestrator](airflow.md) | `airflow` | `airflow` | Runs pipelines using Airflow. | -| [SkypilotAWSOrchestrator](skypilot-vm.md) | `vm_aws` | `skypilot[aws]` | Runs pipelines in AWS VMs using SkyPilot. | -| [SkypilotGCPOrchestrator](skypilot-vm.md) | `vm_gcp` | `skypilot[gcp]` | Runs pipelines in GCP VMs using SkyPilot. | +ZenML provides several orchestrator flavors, including: + +| Orchestrator | Flavor | Integration | Notes | +|----------------------------------|-----------------|-------------------|--------------------------------------| +| [LocalOrchestrator](local.md) | `local` | _built-in_ | Runs pipelines locally. | +| [LocalDockerOrchestrator](local-docker.md) | `local_docker` | _built-in_ | Runs pipelines locally using Docker. | +| [KubernetesOrchestrator](kubernetes.md) | `kubernetes` | `kubernetes` | Runs pipelines in Kubernetes. | +| [KubeflowOrchestrator](kubeflow.md) | `kubeflow` | `kubeflow` | Runs pipelines using Kubeflow. | +| [VertexOrchestrator](vertex.md) | `vertex` | `gcp` | Runs pipelines in Vertex AI. | +| [SagemakerOrchestrator](sagemaker.md) | `sagemaker` | `aws` | Runs pipelines in Sagemaker. | +| [AzureMLOrchestrator](azureml.md) | `azureml` | `azure` | Runs pipelines in AzureML. | +| [TektonOrchestrator](tekton.md) | `tekton` | `tekton` | Runs pipelines using Tekton. | +| [AirflowOrchestrator](airflow.md) | `airflow` | `airflow` | Runs pipelines using Airflow. | +| [SkypilotAWSOrchestrator](skypilot-vm.md) | `vm_aws` | `skypilot[aws]` | Runs pipelines in AWS VMs using SkyPilot. | +| [SkypilotGCPOrchestrator](skypilot-vm.md) | `vm_gcp` | `skypilot[gcp]` | Runs pipelines in GCP VMs using SkyPilot. | | [SkypilotAzureOrchestrator](skypilot-vm.md) | `vm_azure` | `skypilot[azure]` | Runs pipelines in Azure VMs using SkyPilot. | -| [HyperAIOrchestrator](hyperai.md) | `hyperai` | `hyperai` | Runs pipelines in HyperAI.ai instances. | -| [Custom Implementation](custom.md) | _custom_ | | Extend the orchestrator abstraction. | +| [HyperAIOrchestrator](hyperai.md) | `hyperai` | `hyperai` | Runs pipelines in HyperAI.ai instances. | +| [Custom Implementation](custom.md) | _custom_ | | Extend the orchestrator abstraction. | To view available orchestrator flavors, use: ```shell @@ -8953,7 +8878,7 @@ zenml orchestrator flavor list ``` ### Usage -Direct interaction with ZenML orchestrators in code is not required. 
Simply ensure the desired orchestrator is part of your active ZenML stack and execute your pipeline with: +You don't need to interact directly with the orchestrator in your code. Simply ensure the desired orchestrator is part of your active ZenML stack, and run your pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` @@ -8967,8 +8892,8 @@ pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` -### Specifying Resources -To specify hardware requirements for pipeline steps, refer to the [runtime configuration guide](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md). For unsupported orchestrators, consider using [step operators](../step-operators/step-operators.md). +### Resource Specification +For steps requiring specific hardware, specify resources as detailed [here](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md). If unsupported, consider using [step operators](../step-operators/step-operators.md). ================================================== @@ -8976,21 +8901,22 @@ To specify hardware requirements for pipeline steps, refer to the [runtime confi ### Databricks Orchestrator Overview -The Databricks Orchestrator, part of the ZenML integration, enables running ML pipelines on Databricks, leveraging its distributed computing capabilities and optimized environment for big data processing. +**Databricks** is a unified data analytics platform that integrates data warehouses and lakes, optimizing big data processing and machine learning (ML). The **Databricks orchestrator** is a ZenML integration that allows running ML pipelines on Databricks, utilizing its distributed computing capabilities. #### When to Use -- If you are already using Databricks for data and ML workloads. -- To utilize Databricks' distributed computing for ML pipelines. -- For a managed solution that integrates with other Databricks services. +- If you're using Databricks for data and ML workloads. +- To leverage Databricks' distributed computing for ML pipelines. +- For a managed solution that integrates with Databricks services. +- To utilize Databricks' optimization for big data processing. #### Prerequisites - An active Databricks workspace (AWS, Azure, GCP). - A Databricks account or service account with permissions to create and run jobs. #### How It Works -1. **Wheel Packages**: ZenML creates a Python wheel package containing pipeline code and dependencies. -2. **Job Definition**: ZenML uses the Databricks SDK to define a job that includes pipeline steps and cluster settings (e.g., Spark version, worker count). -3. **Execution**: The job retrieves the wheel package and executes the pipeline, ensuring steps run in the correct order. +1. **Wheel Packages**: ZenML creates a Python wheel package containing code and dependencies for your pipeline. +2. **Job Definition**: ZenML uses the Databricks SDK to create a job definition that specifies pipeline steps and their execution order. +3. **Execution**: The job retrieves the wheel package and runs it on a specified cluster configuration. 4. **Monitoring**: ZenML retrieves logs and job status for monitoring. #### Usage Steps @@ -8998,21 +8924,24 @@ The Databricks Orchestrator, part of the ZenML integration, enables running ML p ```shell zenml integration install databricks ``` + 2. 
**Register Orchestrator**: ```shell zenml orchestrator register databricks_orchestrator --flavor=databricks --host="https://xxxxx.x.azuredatabricks.net" --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}} ``` + 3. **Add to Stack**: ```shell zenml stack register databricks_stack -o databricks_orchestrator ... --set ``` + 4. **Run Pipeline**: ```shell python run.py ``` #### Databricks UI -Access pipeline run details and logs through the Databricks UI. Retrieve the UI URL in Python: +Access pipeline run details and logs via the Databricks UI. Retrieve the UI URL in Python: ```python from zenml.client import Client @@ -9025,12 +8954,15 @@ Use Databricks' native scheduling capability: ```python from zenml.config.schedule import Schedule -pipeline_instance.run(schedule=Schedule(cron_expression="*/5 * * * *")) +pipeline_instance.run( + schedule=Schedule(cron_expression="*/5 * * * *") +) ``` -- Only `cron_expression` is supported; Java Timezone IDs are required. +- Only `cron_expression` is supported. +- Use Java Timezone IDs in the `cron_expression`. #### Additional Configuration -Customize settings using `DatabricksOrchestratorSettings`: +Customize the orchestrator using `DatabricksOrchestratorSettings`: ```python from zenml.integrations.databricks.flavors.databricks_orchestrator_flavor import DatabricksOrchestratorSettings @@ -9038,6 +8970,7 @@ databricks_settings = DatabricksOrchestratorSettings( spark_version="15.3.x-scala2.12", num_workers="3", node_type_id="Standard_D4s_v5", + autoscale=(2, 3), schedule_timezone="America/Los_Angeles" ) ``` @@ -9049,21 +8982,18 @@ def my_pipeline(): ``` #### GPU Support -To enable GPU support, adjust `spark_version` and `node_type_id`: +To enable GPU support, change `spark_version` and `node_type_id`: ```python databricks_settings = DatabricksOrchestratorSettings( spark_version="15.3.x-gpu-ml-scala2.12", - node_type_id="Standard_NC24ads_A100_v4" + node_type_id="Standard_NC24ads_A100_v4", + autoscale=(1, 2), ) ``` -For CUDA acceleration, follow specific instructions to configure GPU settings. - -#### Documentation Links -- [Databricks Service Account Permissions](https://docs.databricks.com/dev-tools/api/latest/authentication.html) -- [Supported Timezones](https://docs.oracle.com/middleware/1221/wcs/tag-ref/MISC/TimeZones.html) -- [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.flavors.databricks_orchestrator_flavor.DatabricksOrchestratorSettings) +Follow additional instructions to enable CUDA for GPU acceleration. -This summary provides a concise overview of the Databricks Orchestrator, its usage, and configuration details necessary for effective implementation. +### References +- For a full list of configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.orchestrators.databricks_orchestrator.DatabricksOrchestrator). ================================================== @@ -9071,60 +9001,73 @@ This summary provides a concise overview of the Databricks Orchestrator, its usa ### SkyPilot VM Orchestrator Overview -The SkyPilot VM Orchestrator, integrated with ZenML, enables provisioning and management of virtual machines (VMs) across supported cloud providers via the SkyPilot framework. It simplifies running machine learning workloads in the cloud, optimizing for cost and GPU availability without the complexities of infrastructure management. 
+The **SkyPilot VM Orchestrator** is an integration by ZenML for provisioning and managing virtual machines (VMs) across supported cloud providers using the SkyPilot framework. It simplifies running machine learning workloads in the cloud, focusing on cost savings and high GPU availability without the complexities of cloud infrastructure management. -**Important Note:** This component is intended for remote ZenML deployments only; using it locally may cause unexpected issues. +**Important Note:** This component is intended for remote ZenML deployments only. Using it locally may cause unexpected behavior. ### When to Use Use the SkyPilot VM Orchestrator if you: -- Aim to maximize cost savings with spot VMs and auto-selection of the cheapest options. -- Require high GPU availability across multiple cloud zones. -- Prefer not to maintain Kubernetes or pay for managed solutions like Sagemaker. +- Want cost savings via spot VMs and automatic selection of the cheapest options. +- Require high GPU availability across various zones/regions/clouds. +- Prefer not to maintain Kubernetes solutions or pay for managed services like SageMaker. ### Functionality -The orchestrator automates VM provisioning and scaling, supporting on-demand and managed spot VMs. It includes: -- An optimizer for selecting the most cost-effective VM options. -- An autostop feature to clean up idle clusters, reducing unnecessary costs. +- **Provisioning and Scaling:** Automatically launches VMs for pipelines, supporting on-demand and managed spot VMs. +- **Optimizer:** Selects the cheapest VM/zone/region/cloud. +- **Autostop Feature:** Cleans up idle clusters to prevent unnecessary costs. -**Configuration Note:** You can specify VM types and resources for each pipeline step individually. +**Configuration:** You can specify VM types and resources for each pipeline step. For GPU support in Docker containers, configure `docker_run_args=["--gpus=all"]`. -### Deployment Requirements +### Deployment -To deploy the SkyPilot VM Orchestrator: -- Ensure you have permissions to provision VMs on your chosen cloud provider. -- Configure the orchestrator using service connectors for authentication. +No special steps are needed for deployment. Ensure you have permissions to provision VMs on your chosen cloud provider and configure the SkyPilot orchestrator using service connectors. **Supported Platforms:** AWS, GCP, Azure. -### Installation Steps +### Usage Steps 1. **Install SkyPilot Integration:** + - **AWS:** + ```shell + pip install "zenml[connectors-aws]" + zenml integration install aws skypilot_aws + ``` + - **GCP:** + ```shell + pip install "zenml[connectors-gcp]" + zenml integration install gcp skypilot_gcp + ``` + - **Azure:** + ```shell + pip install "zenml[connectors-azure]" + zenml integration install azure skypilot_azure + ``` - For AWS: - ```shell - pip install "zenml[connectors-aws]" - zenml integration install aws skypilot_aws - ``` +2. **Configure Service Connector:** + - Follow specific instructions for AWS, GCP, Azure, or Lambda Labs to set up authentication. - For GCP: +3. **Register Orchestrator:** ```shell - pip install "zenml[connectors-gcp]" - zenml integration install gcp skypilot_gcp + zenml orchestrator register --flavor + zenml orchestrator connect --connector + zenml stack register -o ... --set ``` - For Azure: - ```shell - pip install "zenml[connectors-azure]" - zenml integration install azure skypilot_azure - ``` +### Additional Configuration -2. 
**Configure Service Connectors:** Follow the specific instructions for AWS, GCP, Azure, or Lambda Labs to set up service connectors for authentication. +You can customize the orchestrator settings based on the cloud provider, such as: +- `instance_type` +- `cpus` +- `memory` +- `accelerators` +- `region` +- `zone` +- `disk_size` -### Example Configuration for Cloud Providers +### Example Configuration for AWS -#### AWS Example: ```python from zenml.integrations.skypilot_aws.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings @@ -9136,50 +9079,34 @@ skypilot_settings = SkypilotAWSOrchestratorSettings( region="us-west-1", cluster_name="my_cluster", idle_minutes_to_autostop=60, + down=True, docker_run_args=["--gpus=all"] ) @pipeline(settings={"orchestrator": skypilot_settings}) -``` - -#### GCP Example: -```python -from zenml.integrations.skypilot_gcp.flavors.skypilot_orchestrator_gcp_vm_flavor import SkypilotGCPOrchestratorSettings - -skypilot_settings = SkypilotGCPOrchestratorSettings( - cpus="2", - memory="16", - accelerators="V100:2", - use_spot=True, - region="us-west1", - cluster_name="my_cluster", - idle_minutes_to_autostop=60, - docker_run_args=["--gpus=all"] -) - -@pipeline(settings={"orchestrator": skypilot_settings}) +def my_pipeline(): + pass ``` ### Configuring Step-Specific Resources -You can configure resources for each pipeline step individually. If no specific settings are provided, the orchestrator defaults to the general settings. To disable step-based settings: +The orchestrator allows configuring resources for each pipeline step. If no specific settings are provided, it defaults to the orchestrator's settings. To disable step-based settings, use: ```shell zenml orchestrator update --disable_step_based_settings=True ``` -**Example for a Resource-Intensive Step:** +**Example for Step-Specific Resources:** ```python @step(settings={"orchestrator": high_resource_settings}) def my_resource_intensive_step(): pass ``` -### Additional Configuration Options - -Key attributes for configuring the orchestrator include: -- `instance_type`, `cpus`, `memory`, `accelerators`, `region`, `zone`, `image_id`, `disk_size`, `disk_tier`, `cluster_name`, `idle_minutes_to_autostop`, `stream_logs`, and `docker_run_args`. +### Important Notes +- Certain features may not be supported across different cloud providers. +- For optimal performance and cost, tailor resources for each pipeline step as needed. -For a comprehensive list of attributes and further details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-skypilot/#zenml.integrations.skypilot.flavors.skypilot_orchestrator_base_vm_flavor.SkypilotBaseOrchestratorSettings). +For further details, refer to the [SkyPilot documentation](https://skypilot.readthedocs.io/en/latest/index.html) and ZenML's SDK documentation. ================================================== @@ -9187,52 +9114,52 @@ For a comprehensive list of attributes and further details, refer to the [SDK do # AzureML Orchestrator Summary -**Overview**: AzureML is a cloud-based orchestration service by Microsoft for building, training, deploying, and managing machine learning models, supporting the entire ML lifecycle. +## Overview +AzureML is a cloud-based orchestration service by Microsoft for building, training, deploying, and managing machine learning models. It supports the entire ML lifecycle, from data preparation to monitoring. -## When to Use AzureML Orchestrator -- If you are using Azure. 
-- For a production-grade orchestrator. -- To track pipeline runs via a UI. -- For a managed pipeline solution. +## Use Cases +Use AzureML orchestrator if: +- You are using Azure. +- You need a production-grade orchestrator. +- You want a UI to track pipeline runs. +- You prefer a managed solution for pipelines. -## Functionality -The ZenML AzureML orchestrator utilizes the AzureML Python SDK v2 to create `CommandComponent` for each ZenML step, assembling them into a pipeline. +## Implementation +The ZenML AzureML orchestrator uses the AzureML Python SDK v2 to create AzureML `CommandComponent` for each ZenML step, assembling them into a pipeline. ## Deployment -1. Deploy ZenML to the cloud (preferably in the same region as AzureML). +To deploy the AzureML orchestrator: +1. Deploy ZenML to the cloud, ideally in the same region as AzureML. 2. Ensure connection to the remote ZenML server. -### Quick Deployment Options -- Use the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md). -- Refer to the [ZenML Azure Terraform module](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md). - -## Usage Requirements +## Requirements +To use the AzureML orchestrator: - Install ZenML Azure integration: ```shell zenml integration install azure ``` -- Docker installed or a remote image builder. -- A remote artifact store and container registry. -- An Azure resource group with AzureML workspace. +- Have Docker installed or a remote image builder. +- Include a remote artifact store and container registry in your stack. +- Set up an Azure resource group with an AzureML workspace. ### Authentication Methods 1. **Default Authentication**: Combines Azure hosting and local development credentials. -2. **Service Principal Authentication (recommended)**: Requires creating a service principal in Azure and registering a ZenML Azure Service Connector: +2. **Service Principal Authentication (recommended)**: Create a service principal on Azure, assign permissions, and register a ZenML Azure Service Connector: ```bash zenml service-connector register --type azure -i zenml orchestrator connect -c ``` ## Docker Integration -ZenML builds a Docker image for each pipeline run as `/zenml:`. +ZenML builds a Docker image for each pipeline run, named `/zenml:`. ## AzureML UI -AzureML workspace includes a Machine Learning studio for managing and debugging pipelines. +The AzureML workspace includes a Machine Learning studio for managing and debugging pipelines. You can inspect steps and view execution logs. ## Configuration Settings -Use `AzureMLOrchestratorSettings` to configure compute resources. Three modes are supported: +The `AzureMLOrchestratorSettings` class configures compute resources with three modes: -1. **Serverless Compute** (Default): +1. **Serverless Compute (Default)**: ```python azureml_settings = AzureMLOrchestratorSettings(mode="serverless") ``` @@ -9261,11 +9188,11 @@ Use `AzureMLOrchestratorSettings` to configure compute resources. Three modes ar ``` ## Scheduling Pipelines -Pipelines can be scheduled using `JobSchedules` with cron expressions or intervals: +AzureML orchestrator supports scheduled pipeline runs using `JobSchedules` with cron expressions or intervals: ```python pipeline.run(schedule=Schedule(cron_expression="*/5 * * * *")) ``` -**Note**: ZenML schedules runs but does not manage the lifecycle of the schedule; users must do this via the Azure UI. 
+Note: ZenML only initiates the schedule; users must manage it via the Azure UI. For more details on compute sizes, refer to the [AzureML documentation](https://learn.microsoft.com/en-us/azure/machine-learning/concept-compute-target?view=azureml-api-2#supported-vm-series-and-sizes). @@ -9273,78 +9200,80 @@ For more details on compute sizes, refer to the [AzureML documentation](https:// === File: docs/book/component-guide/orchestrators/tekton.md === -# Tekton Orchestrator +# Tekton Orchestrator Documentation Summary -**Tekton** is an open-source framework for creating CI/CD systems, enabling developers to build, test, and deploy applications across various environments. It is designed for use within a remote ZenML deployment scenario. +## Overview +Tekton is an open-source framework for CI/CD, enabling developers to build, test, and deploy applications across various environments. This component is designed for remote ZenML deployments only. ## When to Use Tekton -Use the Tekton orchestrator if: -- You need a production-grade orchestrator. -- You want a UI to track pipeline runs. -- You are comfortable with Kubernetes setup and maintenance. -- You can deploy and maintain Tekton Pipelines on your cluster. +- Proven production-grade orchestrator. +- UI for tracking pipeline runs. +- Familiarity with Kubernetes or willingness to set it up. +- Ability to deploy and maintain Tekton Pipelines. ## Deployment Steps -1. **Set Up a Kubernetes Cluster**: Choose your cloud provider (AWS, GCP, Azure) and follow the steps to set up a cluster. -2. **Install Tekton Pipelines**: After setting up your cluster, install Tekton Pipelines. - -### Example Commands -- **AWS**: - ```powershell - aws eks --region REGION update-kubeconfig --name CLUSTER_NAME - ``` -- **GCP**: - ```powershell - gcloud container clusters get-credentials CLUSTER_NAME - ``` -- **Azure**: - ```powershell - az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME - ``` +1. **Set Up Kubernetes Cluster**: + - **AWS**: + - Use an EKS cluster. + - Configure `kubectl`: + ```powershell + aws eks --region REGION update-kubeconfig --name CLUSTER_NAME + ``` + - Install Tekton Pipelines. + - **GCP**: + - Use a GKE cluster. + - Configure `kubectl`: + ```powershell + gcloud container clusters get-credentials CLUSTER_NAME + ``` + - Install Tekton Pipelines. + - **Azure**: + - Use an AKS cluster. + - Configure `kubectl`: + ```powershell + az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME + ``` + - Install Tekton Pipelines. + +**Note**: Ensure Tekton Pipelines version >=0.38.3 is used. -**Note**: Ensure Tekton Pipelines version is >=0.38.3. +## Usage Requirements +- Install ZenML `tekton` integration: + ```shell + zenml integration install tekton -y + ``` +- Docker installed and running. +- Remote artifact store and container registry configured. +- Optional: `kubectl` installed for context management. -## Usage -To use the Tekton orchestrator: -1. Install the ZenML `tekton` integration: - ```shell - zenml integration install tekton -y - ``` -2. Ensure Docker is installed and running. -3. Deploy Tekton pipelines on a remote cluster. -4. Obtain the Kubernetes context using: - ```shell - kubectl config get-contexts - ``` - -### Registering the Orchestrator +## Registering the Orchestrator 1. **With Service Connector**: - ```shell - zenml orchestrator register --flavor tekton - zenml orchestrator connect --connector - zenml stack register -o ... 
--set - ``` + ```shell + zenml orchestrator register --flavor tekton + zenml orchestrator connect --connector + zenml stack register -o ... --set + ``` 2. **Without Service Connector**: - ```shell - zenml orchestrator register --flavor=tekton --kubernetes_context= - zenml stack register -o ... --set - ``` + ```shell + zenml orchestrator register --flavor=tekton --kubernetes_context= + zenml stack register -o ... --set + ``` -### Running a Pipeline -Run a ZenML pipeline using: +## Running a Pipeline +Execute a ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` -### Tekton UI -Access the Tekton UI for pipeline details: +## Tekton UI +Access the Tekton UI for pipeline run details: ```bash kubectl get ingress -n tekton-pipelines -o jsonpath='{.items[0].spec.rules[0].host}' ``` ## Additional Configuration -Use `TektonOrchestratorSettings` for node selectors, affinity, and tolerations: +Configure `TektonOrchestratorSettings` for node selectors, affinity, and tolerations: ```python from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import TektonOrchestratorSettings @@ -9356,7 +9285,7 @@ tekton_settings = TektonOrchestratorSettings( ) ``` -Specify hardware requirements with `ResourceSettings`: +Specify hardware requirements using `ResourceSettings`: ```python resource_settings = ResourceSettings(cpu_count=8, memory="16GB") ``` @@ -9372,8 +9301,8 @@ def my_step(): ... ``` -## Enabling CUDA for GPU -For GPU usage, follow specific instructions to enable CUDA for full acceleration. +## GPU Configuration +For GPU usage, follow specific instructions to enable CUDA for acceleration. For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-tekton/#zenml.integrations.tekton.orchestrators.tekton_orchestrator.TektonOrchestrator). @@ -9383,89 +9312,87 @@ For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integr ### Kubeflow Orchestrator Overview -The Kubeflow orchestrator is a ZenML integration that utilizes [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) for running pipelines. It is designed for remote ZenML deployments and is not recommended for local setups. +The Kubeflow orchestrator is a ZenML integration that utilizes [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) for running pipelines. It is designed for remote ZenML deployments and is not suitable for local setups. -#### When to Use +### When to Use Use the Kubeflow orchestrator if you need: - A production-grade orchestrator. - A UI for tracking pipeline runs. -- Familiarity with Kubernetes or willingness to set up a cluster. +- Familiarity with Kubernetes or willingness to set it up. - Capability to deploy and maintain Kubeflow Pipelines. -#### Deployment Steps +### Deployment Steps -To deploy ZenML pipelines on Kubeflow, set up a Kubernetes cluster and install Kubeflow Pipelines. Here’s how to do it on various cloud providers: +To deploy ZenML pipelines on Kubeflow, set up a Kubernetes cluster and install Kubeflow Pipelines. Here’s a brief guide for various cloud providers: -**AWS:** -1. Set up an [EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html). -2. Install AWS CLI and configure it. -3. Install `kubectl` and connect to EKS: +#### AWS +1. Set up an EKS cluster. +2. Install AWS CLI and configure `kubectl`: ```powershell aws eks --region REGION update-kubeconfig --name CLUSTER_NAME ``` -4. Install Kubeflow Pipelines. +3. 
Install Kubeflow Pipelines. -**GCP:** -1. Set up a [GKE cluster](https://cloud.google.com/kubernetes-engine/docs/quickstart). -2. Install Google Cloud CLI and configure it. -3. Install `kubectl` and connect to GKE: +#### GCP +1. Set up a GKE cluster. +2. Install Google Cloud CLI and configure `kubectl`: ```powershell gcloud container clusters get-credentials CLUSTER_NAME ``` -4. Install Kubeflow Pipelines. +3. Install Kubeflow Pipelines. -**Azure:** -1. Set up an [AKS cluster](https://azure.microsoft.com/en-in/services/kubernetes-service/#documentation). -2. Install `az CLI` and configure it. -3. Install `kubectl` and connect to AKS: +#### Azure +1. Set up an AKS cluster. +2. Install Azure CLI and configure `kubectl`: ```powershell az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME ``` -4. Install Kubeflow Pipelines. Note: Change the default runtime to `k8sapi` if using `containerd`. +3. Install Kubeflow Pipelines. +4. Adjust `containerRuntimeExecutor` in the workflow controller's ConfigMap if necessary. -**Other Kubernetes:** +#### Other Kubernetes 1. Set up a Kubernetes cluster. -2. Install `kubectl` and connect to your cluster. +2. Install `kubectl` and configure it. 3. Install Kubeflow Pipelines. -#### Usage Requirements +### Usage Requirements To use the Kubeflow orchestrator: -- A Kubernetes cluster with Kubeflow Pipelines installed. +- A Kubernetes cluster with Kubeflow Pipelines. - A remote ZenML server. -- ZenML `kubeflow` integration installed: +- Install the ZenML `kubeflow` integration: ```shell zenml integration install kubeflow ``` - Docker installed (unless using a remote Image Builder). -- Optional: `kubectl` installed. +- Optionally, `kubectl` installed. -#### Registering the Orchestrator +### Registering the Orchestrator -1. With a Service Connector: +1. **With Service Connector**: ```shell zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator register --flavor kubeflow --connector --resource-id zenml stack register -o -a -c ``` -2. Without a Service Connector: +2. **Without Service Connector**: ```shell zenml orchestrator register --flavor=kubeflow --kubernetes_context= zenml stack register -o -a -c ``` -#### Running a Pipeline +### Running a Pipeline To run a ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` -#### Kubeflow UI +### Kubeflow UI -Access the Kubeflow UI for pipeline run details: +Access the Kubeflow UI to view pipeline run details: ```python from zenml.client import Client @@ -9473,41 +9400,31 @@ pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"] ``` -#### Additional Configuration +### Additional Configuration + +You can configure the Kubeflow orchestrator with `KubeflowOrchestratorSettings` for: +- `client_args`: KFP client arguments. +- `user_namespace`: Namespace for experiments. +- `pod_settings`: Node selectors and tolerations. -Use `KubeflowOrchestratorSettings` for additional configurations: +Example: ```python from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings kubeflow_settings = KubeflowOrchestratorSettings( - client_args={}, user_namespace="my_namespace", - pod_settings={ - "affinity": {...}, - "tolerations": [...] - } + pod_settings={"affinity": {...}, "tolerations": [...]} ) ``` -#### GPU Support - -For GPU-backed hardware, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA. 
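+
+To take effect, these settings must be attached to a pipeline (or a single step). A minimal sketch, assuming ZenML's standard settings mechanism (depending on your ZenML version, the settings key is the generic `"orchestrator"` or the flavor-scoped `"orchestrator.kubeflow"`):
+
+```python
+from zenml import pipeline
+from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import (
+    KubeflowOrchestratorSettings,
+)
+
+kubeflow_settings = KubeflowOrchestratorSettings(user_namespace="my_namespace")
+
+# Attach the settings under the flavor-scoped key
+@pipeline(settings={"orchestrator.kubeflow": kubeflow_settings})
+def my_pipeline():
+    ...
+```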
-
-#### Multi-Tenancy
+### Multi-Tenancy Considerations

-For multi-tenant deployments, include the `kubeflow_hostname` parameter when registering:
+For multi-tenant deployments, include `kubeflow_hostname` when registering:
```shell
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=kubeflow --kubeflow_hostname=<KUBEFLOW_HOSTNAME>
```
-Set the user namespace and authentication credentials in `KubeflowOrchestratorSettings`.
-#### Using Secrets
-
-Store credentials as secrets:
-```shell
-zenml secret create kubeflow_secret --username=admin --password=abc123
-```
-Use them in settings:
+Set the namespace and authentication credentials:
```python
kubeflow_settings = KubeflowOrchestratorSettings(
    client_username="{{kubeflow_secret.username}}",
@@ -9516,43 +9433,51 @@ kubeflow_settings = KubeflowOrchestratorSettings(
)
```

-For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubeflow/#zenml.integrations.kubeflow.orchestrators.kubeflow_orchestrator.KubeflowOrchestrator).
+### Using Secrets
+
+Create secrets for sensitive information:
+```shell
+zenml secret create kubeflow_secret --username=admin --password=abc123
+```
+
+### Conclusion
+
+For detailed configuration and attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubeflow/#zenml.integrations.kubeflow.orchestrators.kubeflow_orchestrator.KubeflowOrchestrator).

==================================================

=== File: docs/book/component-guide/orchestrators/vertex.md ===

-# Google Cloud Vertex AI Orchestrator
+# Google Cloud Vertex AI Orchestrator Summary

## Overview
Vertex AI Pipelines is a serverless ML workflow tool on Google Cloud Platform (GCP) designed for running production-ready, repeatable ML pipelines with minimal setup. It is intended for use in remote ZenML deployment scenarios.

## When to Use
Use the Vertex orchestrator if:
- You are using GCP.
- You need a production-grade orchestrator with a UI for tracking pipeline runs.
-- You prefer a managed, serverless solution.
+- You prefer a managed, serverless solution for running pipelines.

## Deployment Requirements
-1. **Deploy ZenML to the cloud**: Recommended in the same GCP project as Vertex infrastructure.
-2. **Enable Vertex APIs**: Ensure relevant APIs are enabled in your GCP project.
-3. **Install ZenML GCP integration**:
-   ```shell
-   zenml integration install gcp
-   ```
-4. **Install Docker**.
-5. **Set up a remote artifact store and container registry**.
-6. **Configure GCP credentials**: Use a user account or service accounts with appropriate permissions.
+1. Deploy ZenML to the cloud, ideally in the same GCP project as Vertex infrastructure.
+2. Ensure connection to the remote ZenML server.
+3. Enable Vertex-related APIs on your GCP project.

-### GCP Credentials and Permissions
-- **Authentication Options**:
-  - Use `gcloud` CLI.
-  - Configure with a service account key file.
-  - Recommended: Use a GCP Service Connector for better security.
+## Usage Requirements
+- Install ZenML `gcp` integration:
+  ```shell
+  zenml integration install gcp
+  ```
+- Install and run Docker.
+- Set up a remote artifact store and container registry.
+- Obtain GCP credentials with appropriate permissions.

-### Vertex AI Pipeline Components
-1. **ZenML Client Environment**: Runs ZenML code and requires permissions to create jobs in Vertex Pipelines.
-2. **Vertex AI Pipeline Environment**: Runs pipeline steps and requires a workload service account with permissions to execute Vertex AI pipelines.
+### GCP Credentials and Permissions
+You need a GCP user account or service accounts with permissions for:
+- Creating jobs in Vertex Pipelines (e.g., `Vertex AI User` role).
+- Running Vertex AI pipelines (e.g., `Vertex AI Service Agent` role).
+- Writing to the artifact store (e.g., `Storage Object Creator` role).

### Configuration Use-Cases
1. **Local `gcloud` CLI with User Account**:
@@ -9577,24 +9502,24 @@ Use the Vertex orchestrator if:
   zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
   ```

-3. **GCP Service Connector with Different Service Accounts**:
-   - Create multiple service accounts for different permissions.
-   - Register the service connector and orchestrator similarly to the single service account case.
+3. **GCP Service Connector with Different Service Accounts**:
+   - Requires multiple service accounts with specific permissions.
+   - Register the service connector and orchestrator similarly as above.

### Configuring the Stack
-To use the orchestrator in your stack:
+To register and activate a stack with the orchestrator:
```shell
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
```

### Running Pipelines
-Run a ZenML pipeline with:
+Run a ZenML pipeline using:
```shell
python file_that_runs_a_zenml_pipeline.py
```

### Vertex UI
-Access pipeline run details and logs via the Vertex UI. Get the URL programmatically:
+Access pipeline run details via the Vertex UI. Get the URL programmatically:
```python
from zenml.client import Client

@@ -9603,68 +9528,94 @@ orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value
```

### Scheduling Pipelines
-Use the native scheduling capability:
+Use native scheduling capabilities:
```python
from zenml.config.schedule import Schedule

-# Schedule every 5 minutes
-pipeline_instance.run(schedule=Schedule(cron_expression="*/5 * * * *"))
-
-# Schedule every hour for a specific time window
-pipeline_instance.run(schedule=Schedule(cron_expression="0 * * * *", start_time=datetime.datetime.now() + datetime.timedelta(days=1), end_time=datetime.datetime.now() + datetime.timedelta(days=3)))
+pipeline_instance.run(
+    schedule=Schedule(cron_expression="*/5 * * * *")
+)
```

### Additional Configuration
-Configure `VertexOrchestratorSettings` for labels or GPU settings:
+Configure labels and resource settings:
```python
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import VertexOrchestratorSettings
from zenml.config.resource_settings import ResourceSettings

vertex_settings = VertexOrchestratorSettings(labels={"key": "value"})
resource_settings = ResourceSettings(cpu_count=8, memory="16GB")
+```

-# For GPU
-vertex_settings = VertexOrchestratorSettings(pod_settings={"node_selectors": {"cloud.google.com/gke-accelerator": "NVIDIA_TESLA_A100"}})
+For GPU usage:
+```python
+vertex_settings = VertexOrchestratorSettings(
+    pod_settings={"node_selectors": {"cloud.google.com/gke-accelerator": "NVIDIA_TESLA_A100"}}
+)
resource_settings = ResourceSettings(gpu_count=1)
```

### Enabling CUDA for GPU
-Follow specific instructions to enable CUDA for GPU acceleration when using the orchestrator.
+Follow specific instructions to enable CUDA for GPU acceleration.
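+
+Like the Kubeflow example earlier, these settings only apply once attached to a pipeline or step. A minimal sketch, assuming ZenML's standard settings mechanism (`vertex_settings` and `resource_settings` are the objects defined above; depending on your ZenML version the flavor-scoped key `"orchestrator.vertex"` may also be the generic `"orchestrator"`):
+
+```python
+from zenml import pipeline
+
+@pipeline(settings={"orchestrator.vertex": vertex_settings, "resources": resource_settings})
+def my_training_pipeline():
+    ...
+```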
-For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.orchestrators.vertex_orchestrator.VertexOrchestrator). +For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.flavors.vertex_orchestrator_flavor.VertexOrchestratorSettings). ================================================== === File: docs/book/component-guide/orchestrators/custom.md === -### Summary: Developing a Custom Orchestrator in ZenML +### Developing a Custom Orchestrator in ZenML #### Overview -To develop a custom orchestrator in ZenML, it's essential to understand the framework's component flavor concepts. Refer to the general guide on writing custom component flavors for foundational knowledge. +To create a custom orchestrator in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Implementation -ZenML provides a `BaseOrchestrator` class that abstracts orchestration details and offers a simplified interface. Key components include: - -- **BaseOrchestratorConfig**: Configuration base class for orchestrators. -- **BaseOrchestrator**: Abstract class requiring implementation of: - - `prepare_or_run_pipeline(deployment, stack, environment)`: Prepares and executes the pipeline. - - `get_orchestrator_run_id()`: Returns a unique run ID for the active orchestrator run. - -- **BaseOrchestratorFlavor**: Abstract class for orchestrator flavors, requiring: - - `name`: Name of the flavor. - - `type`: Returns `StackComponentType.ORCHESTRATOR`. - - `config_class`: Returns `BaseOrchestratorConfig`. - - `implementation_class`: Returns the orchestrator implementation class. - -#### Steps to Build a Custom Orchestrator -1. **Create Orchestrator Class**: Inherit from `BaseOrchestrator` or `ContainerizedOrchestrator` if using Docker images. -2. **Implement Required Methods**: - - `prepare_or_run_pipeline(...)`: Convert the pipeline to a compatible format for your orchestration tool. - - `get_orchestrator_run_id()`: Ensure it returns a consistent ID for all steps in a pipeline run. -3. **Create Flavor Class**: Inherit from `BaseOrchestratorFlavor` and define the flavor name. - -#### Registering the Custom Orchestrator -Use the CLI to register the orchestrator flavor: +ZenML's `BaseOrchestrator` abstracts ZenML-specific details, providing a simplified interface for orchestration tools. 
+ +```python +from abc import ABC, abstractmethod +from typing import Any, Dict, Type +from zenml.models import PipelineDeploymentResponseModel +from zenml.enums import StackComponentType +from zenml.stack import StackComponent, StackComponentConfig, Stack, Flavor + +class BaseOrchestratorConfig(StackComponentConfig): + """Base class for all ZenML orchestrator configurations.""" + +class BaseOrchestrator(StackComponent, ABC): + @abstractmethod + def prepare_or_run_pipeline(self, deployment: PipelineDeploymentResponseModel, stack: Stack, environment: Dict[str, str]) -> Any: + """Prepares and runs the pipeline or returns an intermediate representation.""" + + @abstractmethod + def get_orchestrator_run_id(self) -> str: + """Returns a unique run ID for the active orchestrator run.""" + +class BaseOrchestratorFlavor(Flavor): + @property + @abstractmethod + def name(self): + """Returns the name of the flavor.""" + + @property + def type(self) -> StackComponentType: + return StackComponentType.ORCHESTRATOR + + @property + def config_class(self) -> Type[BaseOrchestratorConfig]: + return BaseOrchestratorConfig + + @property + @abstractmethod + def implementation_class(self) -> Type["BaseOrchestrator"]: + """Implementation class for this flavor.""" +``` + +#### Creating a Custom Orchestrator +1. **Inherit from `BaseOrchestrator`:** Implement `prepare_or_run_pipeline(...)` and `get_orchestrator_run_id()`. +2. **Configuration Class:** Inherit from `BaseOrchestratorConfig` for custom parameters. +3. **Flavor Class:** Inherit from `BaseOrchestratorFlavor` and define the flavor's name. + +To register the orchestrator flavor, use: ```shell zenml orchestrator flavor register ``` @@ -9673,25 +9624,31 @@ Example: zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` +**Note:** Initialize ZenML at the root of your repository for proper flavor resolution. + #### Implementation Guide -1. **Orchestrator Class**: Inherit from `ContainerizedOrchestrator` if using containers. -2. **Method Implementations**: - - Handle scheduling if supported. - - Loop through pipeline steps, configuring commands and arguments for the orchestration tool. - - Ensure environment variables are set correctly. - - Manage step execution order, especially for upstream dependencies. +1. **Create Orchestrator Class:** Inherit from `BaseOrchestrator` or `ContainerizedOrchestrator` if using Docker. +2. **Implement Methods:** + - `prepare_or_run_pipeline(...)`: Convert the pipeline for your orchestration tool and run it. + - `get_orchestrator_run_id()`: Return a unique ID for each pipeline run. #### Optional Features -- **Scheduling**: Handle `deployment.schedule` if supported; otherwise, log a warning or raise an exception. -- **Resource Specification**: Manage CPU, GPU, or memory settings from `step.config.resource_settings`. +- **Scheduling:** Handle `deployment.schedule` if supported. +- **Resource Specification:** Manage resources like CPUs/GPUs from `step.config.resource_settings`. #### Code Sample ```python +from typing import Dict +from zenml.entrypoints import StepEntrypointConfiguration +from zenml.models import PipelineDeploymentResponseModel +from zenml.orchestrators import ContainerizedOrchestrator +from zenml.stack import Stack + class MyOrchestrator(ContainerizedOrchestrator): def get_orchestrator_run_id(self) -> str: ... 
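+        # A typical implementation returns an identifier that the
+        # orchestration tool injects into each step's environment, so that
+        # every step of one pipeline run reports the same run ID
+        # (assumption; adapt to whatever identifier your tool provides).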
- def prepare_or_run_pipeline(self, deployment, stack, environment): + def prepare_or_run_pipeline(self, deployment: PipelineDeploymentResponseModel, stack: Stack, environment: Dict[str, str]) -> None: if deployment.schedule: ... for step_name, step in deployment.step_configurations.items(): @@ -9701,125 +9658,107 @@ class MyOrchestrator(ContainerizedOrchestrator): ... ``` -#### Enabling GPU Support -For GPU usage, follow specific instructions to enable CUDA for optimal performance. - -For a complete example of a custom orchestrator, refer to the provided GitHub link. +#### Enabling CUDA for GPU +For GPU support, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for acceleration. ================================================== === File: docs/book/how-to/debug-and-solve-issues.md === -# ZenML Debugging Guide +# Debugging Guide for ZenML -This guide provides best practices for debugging common issues in ZenML and obtaining help. +This guide provides best practices for debugging common issues in ZenML and obtaining assistance. -## When to Get Help -Before seeking assistance, follow this checklist: -- Search Slack using the built-in search function. -- Check [GitHub issues](https://github.com/zenml-io/zenml/issues). -- Use the search bar on the [documentation site](https://docs.zenml.io). +## When to Seek Help +Before asking for help, follow this checklist: +- Search Slack, GitHub issues, and the ZenML documentation. - Review the [common errors](debug-and-solve-issues.md#most-common-errors) section. - Analyze [additional logs](debug-and-solve-issues.md#41-additional-logs) and [client/server logs](debug-and-solve-issues.md#client-and-server-logs). If unresolved, post your question on [Slack](https://zenml.io/slack). ## How to Post on Slack -Include the following information in your query: +Provide the following information for effective troubleshooting: ### 1. System Information -Run the command below in your terminal and attach the output: - +Run the command to gather system info: ```shell zenml info -a -s ``` - For specific package issues, use: - ```shell zenml info -p ``` -### 2. What Happened? -Briefly describe: -- Your goal -- Expected outcome -- Actual outcome +### 2. Describe the Issue +- What were you trying to achieve? +- What did you expect vs. what actually happened? -### 3. How to Reproduce the Error? -Provide step-by-step instructions to replicate the error. +### 3. Steps to Reproduce +Outline the steps to reproduce the error, either in text or video format. ### 4. Relevant Log Output -Attach relevant logs and full error tracebacks. Include outputs from: - +Attach relevant logs and error tracebacks. Include outputs from: ```shell zenml status zenml stack describe ``` +For orchestrator logs, provide relevant pod logs if applicable. -For orchestrator logs, include the relevant pod logs. If default logs are insufficient, increase verbosity: - +#### 4.1 Additional Logs +If default logs are insufficient, adjust logging verbosity: ```shell export ZENML_LOGGING_VERBOSITY=DEBUG ``` +Refer to documentation for setting environment variables on your OS. ## Client and Server Logs -To view server logs, run: - +To view server logs: ```shell zenml logs ``` -## Most Common Errors - -### 1. 
Error Initializing REST Store
-Occurs as:
-
+## Common Errors
+### Error Initializing REST Store
+Occurs when the local ZenML server is not running after a restart:
```bash
-RuntimeError: Error initializing rest store with URL 'http://127.0.0.1:8237': ...
+RuntimeError: Error initializing rest store with URL 'http://127.0.0.1:8237'...
```
-**Solution:** Re-run `zenml login --local` after each machine restart.
-
-### 2. Column 'step_configuration' Cannot Be Null
-Occurs as:
+Run `zenml login --local` after each restart.
+### Column 'step_configuration' Cannot Be Null
+This error indicates a configuration string is too long:
```bash
sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, "Column 'step_configuration' cannot be null")
```
-**Solution:** Ensure step configurations are within the character limit.
-
-### 3. 'NoneType' Object Has No Attribute 'name'
-Occurs when a required stack component is missing:
+Keep step configurations within the character limit to resolve it.
+### 'NoneType' Object Has No Attribute 'name'
+This error occurs when a required stack component is missing:
```shell
AttributeError: 'NoneType' object has no attribute 'name'
```
-**Solution:** Register the missing component, e.g., an experiment tracker:
-
+To resolve, register the necessary component:
```shell
zenml experiment-tracker register mlflow_tracker --flavor=mlflow
zenml stack update -e mlflow_tracker
```

-This guide aims to streamline the debugging process in ZenML and enhance the support experience.
+This guide aims to streamline the debugging process for ZenML users, ensuring efficient resolution of issues.

==================================================

=== File: docs/book/how-to/project-setup-and-management/interact-with-secrets.md ===

-# ZenML Secrets Management Documentation Summary
-
-## Overview
-ZenML secrets are secure groupings of **key-value pairs** stored in the ZenML secrets store, identified by a **name** for easy reference in pipelines and stacks.
+### Summary of ZenML Secrets Documentation

-## Creating Secrets
+#### What is a ZenML Secret?
+ZenML secrets are collections of **key-value pairs** securely stored in the ZenML secrets store, identified by a **name** for easy retrieval in pipelines and stacks.

-### CLI Method
-To create a secret named `<SECRET_NAME>` with key-value pairs, use:
+#### Creating a Secret
+**CLI Method:**
+To create a secret named `<SECRET_NAME>` with key-value pairs:
```shell
-zenml secret create <SECRET_NAME> \
-    --<KEY_1>=<VALUE_1> \
-    --<KEY_2>=<VALUE_2>
+zenml secret create <SECRET_NAME> --<KEY_1>=<VALUE_1> --<KEY_2>=<VALUE_2>
```
Alternatively, use JSON or YAML format:
```shell
@@ -9834,26 +9773,27 @@ For large values or special characters, read from a file:
zenml secret create <SECRET_NAME> --key=@path/to/file.txt
```

-### Python SDK Method
-Using the ZenML client API:
+**Python SDK Method:**
```python
from zenml.client import Client

client = Client()
-client.create_secret(
-    name="my_secret",
-    values={"username": "admin", "password": "abc123"}
-)
+client.create_secret(name="my_secret", values={"username": "admin", "password": "abc123"})
+```
+
+#### Secret Management Commands
+Use CLI commands to list, update, and delete secrets. For interactive registration of missing secrets in a stack:
+```shell
+zenml stack register-secrets [<STACK_NAME>]
+```

-## Secret Scope
-Secrets can be scoped to a user, ensuring accessibility only to that user:
+#### Scoping Secrets
+Secrets can be scoped to a user, ensuring access control:
```shell
zenml secret create <SECRET_NAME> --scope user --<KEY>=<VALUE>
```

-## Accessing Secrets
-### Referencing Secrets
+#### Accessing Registered Secrets
To reference secrets in stack components, use:
```shell
{{<SECRET_NAME>.<SECRET_KEY>}}
```
@@ -9864,13 +9804,13 @@ zenml secret create mlflow_secret --username=admin --password=abc123
zenml experiment-tracker register mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}}
```

-### Validation Levels
-Set the environment variable `ZENML_SECRET_VALIDATION_LEVEL` to control validation:
+#### Secret Validation Levels
+Control secret validation with the environment variable `ZENML_SECRET_VALIDATION_LEVEL`:
- `NONE`: Disables validation.
- `SECRET_EXISTS`: Validates existence of secrets only.
-- `SECRET_AND_KEY_EXISTS`: Validates both secret and key existence (default).
+- `SECRET_AND_KEY_EXISTS`: (default) Validates both existence of secrets and keys.

-### Fetching Secret Values
+#### Fetching Secret Values in Steps
Access secrets in steps using the ZenML `Client` API:
```python
from zenml import step
@@ -9885,13 +9825,7 @@ def secret_loader() -> None:
    )
```

-## Additional Commands
-The CLI supports listing, updating, and deleting secrets. For interactive registration of missing secrets in a stack:
-```shell
-zenml stack register-secrets [<STACK_NAME>]
-```
-
-For more details, refer to the [CLI guide](https://sdkdocs.zenml.io/latest/cli/#zenml.cli--secrets-management) and the [Client API reference](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/).
+This documentation provides essential commands and methods for managing secrets in ZenML, ensuring secure handling of sensitive information in machine learning workflows.

==================================================

@@ -9902,40 +9836,45 @@ For more details, refer to the [CLI guide](https://sdkdocs.zenml.io/latest/cli/#

### ZenML Project Setup and Management

This section outlines the essential steps for setting up and managing ZenML projects.

## Key Steps for Project Setup:
+1. **Installation**: Install ZenML using pip:
+   ```bash
+   pip install zenml
+   ```

-1. **Installation**:
-   - Install ZenML via pip:
-     ```bash
-     pip install zenml
-     ```
+2. **Initialize a Project**: Create a new ZenML project:
+   ```bash
+   zenml init
+   ```

-2. **Creating a Project**:
-   - Use the command:
-     ```bash
-     zenml init
-     ```
+3. **Configure Stack**: Set up a stack by selecting components (e.g., orchestrators, artifact stores):
+   ```bash
+   zenml stack register my_stack --orchestrator=<ORCHESTRATOR_NAME> --artifact-store=<ARTIFACT_STORE_NAME>
+   ```

-3. **Configuration**:
-   - Configure your project settings, including specifying the backend and orchestrator.
+4. **Create Pipelines**: Define pipelines using decorators (a complete runnable sketch follows below):
+   ```python
+   from zenml import pipeline
+
+   @pipeline
+   def my_pipeline():
+       step1 = step1_op()  # step1_op and step2_op are @step functions defined elsewhere
+       step2 = step2_op(step1)
+   ```

-4. **Version Control**:
-   - Utilize Git for version control to track changes and collaborate effectively.
+5. **Run Pipelines**: Execute the pipeline:
+   ```bash
+   zenml pipeline run my_pipeline
+   ```

## Project Management:
+- **Version Control**: Use Git for versioning your ZenML projects.
+- **Environment Management**: Utilize virtual environments to manage dependencies.
+- **Documentation**: Maintain clear documentation for project structure and components.
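+
+To make the pipeline definition from step 4 concrete, here is a minimal, self-contained sketch (step and pipeline names are illustrative, not part of the original guide):
+
+```python
+from zenml import pipeline, step
+
+@step
+def load_data() -> list:
+    return [1, 2, 3]
+
+@step
+def compute_mean(data: list) -> float:
+    return sum(data) / len(data)
+
+@pipeline
+def my_pipeline():
+    compute_mean(load_data())
+
+if __name__ == "__main__":
+    my_pipeline()  # executes the pipeline on the active stack
+```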
-- **Pipeline Management**: - - Define and manage pipelines using decorators and configuration files. - -- **Artifact Tracking**: - - Use ZenML's built-in tracking to monitor data and model artifacts throughout the pipeline. - -- **Environment Management**: - - Set up different environments for development, testing, and production to ensure consistency. +## Best Practices: +- Regularly update dependencies. +- Use consistent naming conventions for pipelines and stacks. +- Monitor and log pipeline executions for troubleshooting. -- **Documentation**: - - Maintain clear documentation of project structure, dependencies, and workflows for team collaboration. - -This concise guide provides the foundational steps and considerations for effectively setting up and managing ZenML projects. +This summary provides a concise overview of the project setup and management processes within ZenML, ensuring critical information is retained for effective understanding and implementation. ================================================== @@ -9943,44 +9882,43 @@ This concise guide provides the foundational steps and considerations for effect # Access Management and Roles in ZenML -This guide outlines user roles and access management in ZenML, essential for project security and efficiency. +## Overview +This guide outlines user roles and access management in ZenML, emphasizing security and efficiency. ## Typical Roles in an ML Project -Common roles include: - **Data Scientists**: Develop and run pipelines. -- **MLOps Platform Engineers**: Manage infrastructure. +- **MLOps Platform Engineers**: Manage infrastructure and stack components. - **Project Owners**: Oversee ZenML deployment and user access. -Roles can vary, but responsibilities remain similar. +Roles may vary in your team but can be aligned with these responsibilities. -**Note**: Create roles in ZenML Pro with specific permissions for Users or Teams. [Free trial available](https://cloud.zenml.io/). +### Role Creation +You can create roles in ZenML Pro with specific permissions and assign them to users or teams. [Sign up for a free trial](https://cloud.zenml.io/). ## Service Connectors -Service connectors integrate cloud services with ZenML, abstracting credentials and configurations. Only MLOps Platform Engineers should manage these connectors due to their infrastructure knowledge. +Service connectors integrate external cloud services with ZenML, abstracting credentials and configurations. Only MLOps Platform Engineers should manage these connectors, while other team members can use them without accessing sensitive credentials. -### Example Permissions: -- **Data Scientist**: Can use connectors to create stack components and run pipelines but cannot create, update, or delete connectors or access their credentials. -- **MLOps Platform Engineer**: Can create, update, delete connectors, and read their secret values. +### Example Permissions +- **Data Scientist Role**: Can use connectors to create stack components and run pipelines but cannot create, update, or delete connectors or access their credentials. +- **MLOps Platform Engineer Role**: Has full permissions to manage connectors and access secret values. -**Note**: RBAC features are available only in ZenML Pro. Learn more about roles [here](../../../getting-started/zenml-pro/roles.md). +### Note +RBAC features are available in ZenML Pro. Learn more about roles [here](../../../getting-started/zenml-pro/roles.md). 
-## Server Upgrade Responsibilities -Project Owners usually decide on server upgrades after consulting teams. MLOps Platform Engineers are responsible for: -- Backing up data. -- Ensuring no service disruption during upgrades. - -For detailed upgrade practices, refer to the [Best Practices for Upgrading ZenML Servers](../../../how-to/manage-zenml-server/best-practices-upgrading-zenml.md). +## Server Upgrade Responsibility +Project Owners decide on server upgrades, considering team requirements. MLOps Platform Engineers are responsible for executing upgrades, ensuring data backup, and minimizing service disruption. For multi-team environments, ZenML Pro supports [multi-tenancy](../../../getting-started/zenml-pro/tenants.md). ## Pipeline Migration and Maintenance -Data Scientists own pipeline code, while Platform Engineers ensure compatibility with new ZenML versions. They should set up safe testing environments and perform staged upgrades. Data Scientists should review release notes and migration guides. +Data Scientists own pipeline code, while Platform Engineers ensure compatibility with new ZenML versions. Both should review release notes and migration guides during upgrades. ## Best Practices for Access Management -- **Regular Audits**: Review user access periodically. +- **Regular Audits**: Review user access and permissions periodically. - **Role-Based Access Control (RBAC)**: Streamline permission management. - **Least Privilege**: Grant minimal necessary permissions. -- **Documentation**: Keep clear records of roles and access policies. +- **Documentation**: Maintain clear records of roles and access policies. -**Note**: RBAC and permission assignment are exclusive to ZenML Pro users. +### Note +RBAC and permission management are features of ZenML Pro. By adhering to these guidelines, you can maintain a secure and collaborative ZenML environment. @@ -9991,85 +9929,77 @@ By adhering to these guidelines, you can maintain a secure and collaborative Zen # Shared Libraries and Logic for Teams ## Overview -Teams can enhance collaboration and standardization by sharing code libraries. This guide focuses on two key aspects of sharing code with ZenML: what can be shared and how to distribute shared components. +This guide outlines how teams can share code and libraries using ZenML to enhance collaboration, standardization, and robustness across projects. ## What Can Be Shared -ZenML supports several types of custom components for sharing: +ZenML supports sharing various custom components: ### Custom Flavors -- **Definition**: Special integrations not built into ZenML. -- **Implementation Steps**: - 1. Create in a shared repository. - 2. Implement as per ZenML documentation. - 3. Register using ZenML CLI: - ```bash - zenml artifact-store flavor register - ``` +- Create a custom flavor in a shared repository. +- Implement it as per ZenML documentation. +- Register using the ZenML CLI: + ```bash + zenml artifact-store flavor register + ``` ### Custom Steps -- **Definition**: Shareable components created in a separate repository, referenced like Python modules. +- Create and share custom steps via a separate repository, referenced like standard Python modules. ### Custom Materializers -- **Definition**: Common components for sharing. -- **Implementation Steps**: - 1. Create in a shared repository. - 2. Implement as per ZenML documentation. - 3. Import and use in projects. +- Develop a custom materializer in a shared repository. 
+- Implement as described in ZenML documentation, allowing team members to import and use it. ## How to Distribute Shared Components + ### Shared Private Wheels -- **Benefits**: - - Easy installation via pip. - - Simplified version and dependency management. - - Privacy through internal hosting. - -- **Setup Steps**: +A method for internal distribution of Python code: +- **Benefits**: Easy installation, version and dependency management, privacy. +- **Setup**: 1. Create a private PyPI server (e.g., AWS CodeArtifact). 2. Build code into wheel format. 3. Upload to the server. 4. Configure pip to use the private server. - 5. Install packages like public ones. + 5. Install packages using pip. ### Using Shared Libraries with `DockerSettings` -- **Definition**: Specify shared libraries for Docker images in pipelines. -- **Installation Methods**: - - **List of requirements**: - ```python - import os - from zenml.config import DockerSettings - from zenml import pipeline +To include shared libraries in Docker images: +- Specify requirements: + ```python + import os + from zenml.config import DockerSettings + from zenml import pipeline - docker_settings = DockerSettings( - requirements=["my-simple-package==0.1.0"], - environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ.get('PYPI_TOKEN', '')}@my-private-pypi-server.com/{os.environ.get('PYPI_USERNAME', '')}/"} - ) + docker_settings = DockerSettings( + requirements=["my-simple-package==0.1.0"], + environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ.get('PYPI_TOKEN', '')}@my-private-pypi-server.com/{os.environ.get('PYPI_USERNAME', '')}/"} + ) - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... - ``` - - **Using a requirements file**: - ```python - docker_settings = DockerSettings(requirements="/path/to/requirements.txt") + @pipeline(settings={"docker": docker_settings}) + def my_pipeline(...): + ... + ``` +- Alternatively, use a requirements file: + ```python + docker_settings = DockerSettings(requirements="/path/to/requirements.txt") - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... - ``` - - **Example `requirements.txt`**: - ``` - --extra-index-url https://YOURTOKEN@my-private-pypi-server.com/YOURUSERNAME/ - my-simple-package==0.1.0 - ``` + @pipeline(settings={"docker": docker_settings}) + def my_pipeline(...): + ... + ``` + The `requirements.txt` should include: + ``` + --extra-index-url https://YOURTOKEN@my-private-pypi-server.com/YOURUSERNAME/ + my-simple-package==0.1.0 + ``` ## Best Practices -- **Version Control**: Use systems like Git for collaboration. -- **Access Controls**: Implement security measures for private repositories. -- **Documentation**: Maintain clear, up-to-date documentation for shared components. -- **Regular Updates**: Keep libraries updated and communicate changes. -- **Continuous Integration**: Set up CI for quality assurance of shared libraries. +- **Version Control**: Use Git for shared code repositories to facilitate collaboration. +- **Access Controls**: Implement security measures for private PyPI servers. +- **Documentation**: Maintain clear and comprehensive documentation for shared components. +- **Regular Updates**: Keep libraries updated and communicate changes to the team. +- **Continuous Integration**: Set up CI for shared libraries to ensure quality and compatibility. -By following these guidelines, teams can effectively share code and libraries, ensuring consistency and accelerating development within the ZenML framework. 
+By following these guidelines, teams can effectively share code and libraries, enhancing collaboration and accelerating development within the ZenML framework. ================================================== @@ -10077,64 +10007,65 @@ By following these guidelines, teams can effectively share code and libraries, e # Organizing Stacks, Pipelines, Models, and Artifacts in ZenML -This guide outlines the organization of stacks, pipelines, models, and artifacts in ZenML, which are essential for effective MLOps. - -## Key Concepts +## Overview +ZenML architecture consists of **Stacks**, **Pipelines**, **Models**, and **Artifacts**, which are essential for organizing your ML workflow. -- **Stacks**: Configuration of tools and infrastructure for running pipelines, including components like orchestrators and artifact stores. Stacks allow seamless transitions between environments (local, staging, production) and can be reused across multiple pipelines, reducing configuration overhead and promoting reproducibility. +### Key Concepts: +- **Stacks**: Configuration of tools and infrastructure for running pipelines. Composed of components like orchestrators and artifact stores, stacks enable seamless transitions between environments (local, staging, production) and promote reproducibility. + +- **Pipelines**: Series of steps representing tasks in the ML workflow (e.g., data preparation, model training). Modular pipelines allow independent execution and easier management. -- **Pipelines**: Series of steps representing specific tasks in the ML workflow, such as data preparation and model training. It’s beneficial to separate pipelines by task (e.g., training vs. inference) for modularity, easier management, and independent execution. +- **Models**: Collections of related pipelines, artifacts, and metadata. Models facilitate data transfer between pipelines and help manage versions and stages. -- **Models**: Collections of related pipelines, artifacts, and metadata that represent a specific project. Models facilitate data transfer between pipelines (e.g., from training to inference) and can be managed through the Model Control Plane. +- **Artifacts**: Outputs of pipeline steps that can be tracked and reused. Artifacts maintain a clear history of data and model versions. -- **Artifacts**: Outputs of pipeline steps that are tracked and reused across pipelines. Artifacts should be named for easy identification, and each pipeline run generates a new version for traceability. +## Stack Management +- A single stack can support multiple pipelines, reducing configuration overhead and ensuring a consistent execution environment. +- For detailed stack management, refer to the [Managing Stacks and Components](../../infrastructure-deployment/stack-deployment/README.md) guide. ## Organizing Pipelines, Models, and Artifacts - ### Pipelines -- Separate pipelines for different tasks enhance modularity and management. -- Independent execution allows for focused testing and development. +- Separate pipelines for different tasks (e.g., training vs. inference) enhance modularity and manageability. +- Benefits include independent execution, easier code management, and improved organization of runs. ### Models -- Use models to connect related pipelines and manage artifacts. -- The Model Control Plane helps track model versions and stages. +- Use Models to connect related pipelines and artifacts. They help in transferring trained models between pipelines. 
+- The Model Control Plane allows version management and stage assignments for models.

### Artifacts
-- Track and reuse artifacts across pipelines, ensuring clear history and traceability.
-- Log metadata for better organization and visibility.
+- Artifacts should be named for easy identification and reuse. Each pipeline run generates a new artifact version, ensuring traceability.
+- Artifacts can be linked to Models for better organization.

## Example Workflow
-
-1. **Team Setup**: Two team members (Bob and Alice) create three pipelines: feature engineering, model training, and inference.
-2. **Stack Configuration**: They use a shared `default` stack for local testing, allowing quick iterations.
-3. **Artifact Management**: Bob’s training pipeline produces model artifacts, which Alice’s inference pipeline uses.
-4. **Model Control**: They utilize the Model Control Plane to manage model versions and promote the best performing models for inference.
+1. Team members create separate pipelines for feature engineering, training, and inference.
+2. They use a shared stack for local testing, allowing rapid iteration.
+3. The training pipeline produces model artifacts that the inference pipeline consumes.
+4. The Model Control Plane tracks model versions, enabling easy comparisons and promotions to production.

## Rules of Thumb
-
### Models
-- One model per distinct ML use-case.
-- Group related pipelines and artifacts together.
-- Use the Model Control Plane for version management.
+- One Model per ML use-case.
+- Group related pipelines and artifacts within a Model.
+- Manage versions and stages using the Model Control Plane.

### Stacks
-- Maintain separate stacks for different environments.
-- Share production and staging stacks for consistency.
+- Maintain distinct stacks for different environments.
+- Share production and staging stacks across teams.
- Keep local stacks simple for rapid development.

### Naming and Organization
- Use consistent naming conventions.
- Leverage tags for resource organization.
-- Document configurations and dependencies.
+- Document stack configurations and dependencies.
- Ensure pipeline code is modular and reusable.

-Following these guidelines will help maintain a scalable MLOps workflow as your project evolves.
+Following these guidelines will help maintain an efficient and scalable MLOps workflow in ZenML.

==================================================

=== File: docs/book/how-to/project-setup-and-management/collaborate-with-team/README.md ===

-It seems that the documentation text you provided is incomplete or missing. Please provide the full text that you would like summarized, and I will be happy to assist you!
+_This source file is currently empty; there is no content to summarize._

==================================================

=== File: docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/create-your-own-template.md ===

### Creating Your Own ZenML Template

-Creating a ZenML template standardizes and shares ML workflows across projects. ZenML utilizes [Copier](https://copier.readthedocs.io/en/stable/) for template management, allowing project generation from templates. Follow these steps to create your ZenML template:
+Creating a ZenML template standardizes and shares ML workflows across projects. ZenML utilizes [Copier](https://copier.readthedocs.io/en/stable/) for managing project templates. Follow these steps to create your own template:

1.
**Create a Repository**: Set up a new repository to store your template's code and configuration files. - -2. **Define ML Workflows**: Use existing ZenML templates (e.g., [starter template](https://github.com/zenml-io/template-starter)) as a reference to define your steps and pipelines. + +2. **Define ML Workflows**: Use existing ZenML templates (e.g., [starter template](https://github.com/zenml-io/template-starter)) as a base to define your ML steps and pipelines. -3. **Create `copier.yml`**: This file defines template parameters and default values. Refer to the [Copier documentation](https://copier.readthedocs.io/en/stable/creating/) for details. +3. **Create `copier.yml`**: This file defines the template's parameters and default values. Refer to the [Copier documentation](https://copier.readthedocs.io/en/stable/creating/) for details. -4. **Test Your Template**: Use the `copier` command to generate a new project: +4. **Test Your Template**: Use the `copier` command to generate a new project from your template: ```bash copier copy https://github.com/your-username/your-template.git your-project ``` -5. **Use Your Template with ZenML**: Initialize your ZenML project with your template: +5. **Use Your Template with ZenML**: Initialize a ZenML project with your template: ```bash zenml init --template https://github.com/your-username/your-template.git @@ -10168,7 +10099,9 @@ Creating a ZenML template standardizes and shares ML workflows across projects. zenml init --template https://github.com/your-username/your-template.git --template-tag v1.0.0 ``` -Keep your template updated with best practices. For practical guidance, install the `e2e_batch` template using: +### Example for Setting Up `e2e_batch` Template + +To follow along with the documentation using the `e2e_batch` template, run: ```bash mkdir e2e_batch @@ -10176,7 +10109,8 @@ cd e2e_batch zenml init --template e2e_batch --template-with-defaults ``` -This setup will help you follow the documentation effectively in your local environment. +### Note +Keep your template updated with best practices and changes in ML workflows. The [Production Guide](../../../../user-guide/production-guide/README.md) is based on the `E2E Batch` project template, which is recommended for installation. ================================================== @@ -10184,27 +10118,27 @@ This setup will help you follow the documentation effectively in your local envi ### ZenML Project Templates Overview -ZenML offers project templates to help users quickly understand and build ML pipelines. These templates include essential steps, pipelines, and a user-friendly CLI. +ZenML provides project templates to help users quickly understand the framework and build ML pipelines. These templates cover major use cases and include a simple CLI for ease of use. #### Available Project Templates | Project Template [Short name] | Tags | Description | |-------------------------------|------|-------------| -| [Starter template](https://github.com/zenml-io/template-starter) [starter] | basic, scikit-learn | Provides foundational ML components: parameterized steps, a model training pipeline, and a simple CLI, centered around a versatile scikit-learn model training use case. 
| -| [E2E Training with Batch Predictions](https://github.com/zenml-io/template-e2e-batch) [e2e_batch] | etl, hp-tuning, model-promotion, drift-detection, batch-prediction, scikit-learn | A comprehensive template featuring two pipelines: data loading, splitting, preprocessing, hyperparameter tuning, model training/evaluation, production promotion, data drift detection, and batch inference. | -| [NLP Training Pipeline](https://github.com/zenml-io/template-nlp) [nlp] | nlp, hp-tuning, model-promotion, training, pytorch, gradio, huggingface | A straightforward NLP training pipeline covering tokenization, training, hyperparameter tuning, evaluation, and deployment for BERT or GPT-2 models, with local testing using Gradio. | +| [Starter template](https://github.com/zenml-io/template-starter) [code: `starter`] | `basic`, `scikit-learn` | Basic ML components for starting with ZenML, including parameterized steps, a model training pipeline, and a simple CLI, using scikit-learn. | +| [E2E Training with Batch Predictions](https://github.com/zenml-io/template-e2e-batch) [code: `e2e_batch`] | `etl`, `hp-tuning`, `model-promotion`, `drift-detection`, `batch-prediction`, `scikit-learn` | A comprehensive template with two pipelines covering data loading, preprocessing, hyperparameter tuning, model training, evaluation, promotion, drift detection, and batch inference. | +| [NLP Training Pipeline](https://github.com/zenml-io/template-nlp) [code: `nlp`] | `nlp`, `hp-tuning`, `model-promotion`, `training`, `pytorch`, `gradio`, `huggingface` | A simple NLP pipeline for tokenization, training, hyperparameter tuning, evaluation, and deployment of BERT or GPT-2 models, tested locally with Gradio. | #### Using a Project Template -To utilize the templates, install ZenML with the templates extra: +To use the templates, install ZenML with the templates extras: ```bash pip install zenml[templates] ``` -**Note:** These templates differ from 'Run Templates' used for triggering pipelines. More information on Run Templates can be found [here](https://docs.zenml.io/how-to/trigger-pipelines). +**Note:** These templates differ from 'Run Templates' used for triggering pipelines, which can be explored [here](https://docs.zenml.io/how-to/trigger-pipelines). -To generate a project from a template, use: +To generate a project from a template, use the `zenml init` command with the `--template` flag: ```bash zenml init --template @@ -10218,9 +10152,7 @@ zenml init --template --template-with-defaults # Example: zenml init --template e2e_batch --template-with-defaults ``` -#### Collaboration Invitation - -ZenML invites users with personal projects to contribute templates for better understanding and enhancement of real-world MLOps scenarios. Interested users can join the [ZenML Slack](https://zenml.io/slack/) for collaboration. +ZenML invites collaboration for new project templates. Interested users can join their [Slack](https://zenml.io/slack/) for discussions. ================================================== @@ -10229,7 +10161,7 @@ ZenML invites users with personal projects to contribute templates for better un ### Recommended Repository Structure and Best Practices for ZenML #### Project Structure -A suggested structure for ZenML projects is as follows: +A recommended structure for ZenML projects is as follows: ```markdown . @@ -10240,11 +10172,13 @@ A suggested structure for ZenML projects is as follows: │ │ ├── loader_step.py │ │ └── requirements.txt (optional) │ └── training_step +│ └── ... 
├── pipelines │ ├── training_pipeline │ │ ├── training_pipeline.py │ │ └── requirements.txt (optional) │ └── deployment_pipeline +│ └── ... ├── notebooks │ └── *.ipynb ├── requirements.txt @@ -10252,12 +10186,11 @@ A suggested structure for ZenML projects is as follows: └── run.py ``` -- **Steps and Pipelines**: Organize your steps and pipelines in separate folders. For simpler projects, steps can remain at the top level. -- **Code Repository**: Register your repository to track code versions and speed up Docker image builds. +- **Steps**: Store each step in separate Python files to manage utils and dependencies easily. +- **Pipelines**: Keep pipelines in separate files. Avoid naming pipelines or instances "pipeline" to prevent conflicts with the ZenML decorator. -#### Steps -- Store each step in separate Python files to manage utils, dependencies, and Dockerfiles independently. -- Use the `logging` module for logging, which will be recorded in the ZenML dashboard. +#### Logging +Use the `logging` module to capture logs, which will be recorded in the artifact store: ```python from zenml.logger import get_logger @@ -10269,26 +10202,22 @@ def training_data_loader(): logger.info("My logs") ``` -#### Pipelines -- Keep pipelines in separate Python files and separate execution from definition. -- Avoid naming pipelines "pipeline" to prevent conflicts with the imported `pipeline` decorator. - -#### .dockerignore -- Use `.dockerignore` to exclude unnecessary files from Docker images, reducing size and build time. - -#### Dockerfile (optional) -- ZenML uses a default Docker image. You can provide your own `Dockerfile` to customize builds. +#### Docker Configuration +- **.dockerignore**: Exclude unnecessary files to optimize Docker image size and build time. +- **Dockerfile**: ZenML uses a default Docker image, but you can customize it with your own `Dockerfile`. #### Notebooks -- Organize all Jupyter notebooks in a dedicated folder. +Organize all Jupyter notebooks in a dedicated folder. -#### .zen -- Run `zenml init` to define the project scope, which is crucial for resolving import paths and storing configurations. +#### ZenML Initialization +Run `zenml init` at the project root to define the project scope, which helps with import paths and configuration storage. This is especially important for projects using Jupyter notebooks. #### run.py -- Place your pipeline runners in the root directory to ensure correct import resolution and define the implicit source's root if `.zen` is not present. +Place the pipeline runner in the repository root to ensure correct import resolution. If no `.zen` file is defined, this file also establishes the implicit source's root. -This structure and these practices will help maintain clarity and efficiency in your ZenML projects. +### Additional Notes +- Registering your repository can help ZenML track code versions and speed up Docker image builds. +- Ensure all import paths are relative to the source's root. ================================================== @@ -10296,80 +10225,105 @@ This structure and these practices will help maintain clarity and efficiency in ### Summary of ZenML Code Repository Integration -**Overview**: ZenML allows you to connect your code repository (e.g., GitHub, GitLab) to track code versions and optimize Docker image builds by avoiding unnecessary rebuilds when source files change. 
+**Overview**: Connecting a Git repository to ZenML allows tracking of code versions and speeds up Docker image builds by avoiding unnecessary rebuilds when source code changes. #### Registering a Code Repository 1. **Install Integration**: - ```bash + ```shell zenml integration install ``` 2. **Register Repository**: - ```bash + ```shell zenml code-repository register --type= [--CODE_REPOSITORY_OPTIONS] ``` #### Available Implementations -- **GitHub**: - - Install integration: - ```bash - zenml integration install github - ``` - - Register repository: - ```bash - zenml code-repository register --type=github \ - --url= --owner= --repository= \ - --token= - ``` - - **Token Management**: - Use ZenML's secret management for secure token storage: - ```bash - zenml secret create github_secret --pa_token= - zenml code-repository register ... --token={{github_secret.pa_token}} - ``` +ZenML supports built-in implementations for GitHub and GitLab, as well as custom repositories. -- **GitLab**: - - Install integration: - ```bash - zenml integration install gitlab - ``` - - Register repository: - ```bash - zenml code-repository register --type=gitlab \ - --url= --group= --project= \ - --token= - ``` - - **Token Management**: - Similar to GitHub: - ```bash - zenml secret create gitlab_secret --pa_token= - zenml code-repository register ... --token={{gitlab_secret.pa_token}} - ``` +##### GitHub +1. **Install Integration**: + ```shell + zenml integration install github + ``` + +2. **Register Repository**: + ```shell + zenml code-repository register --type=github \ + --url= --owner= --repository= \ + --token= + ``` + + - **Parameters**: + - ``: Name of the repository. + - ``: Repository owner. + - ``: Personal Access Token. + - ``: Defaults to `https://github.com` (use for GitHub Enterprise). -#### Developing a Custom Code Repository + - **Secure Token Storage**: + ```shell + zenml secret create github_secret --pa_token= + zenml code-repository register ... --token={{github_secret.pa_token}} + ``` + +##### GitLab +1. **Install Integration**: + ```shell + zenml integration install gitlab + ``` + +2. **Register Repository**: + ```shell + zenml code-repository register --type=gitlab \ + --url= --group= --project= \ + --token= + ``` + + - **Parameters**: + - ``: Project group. + - ``: Project name. + - ``: Personal Access Token. + - ``: Defaults to `https://gitlab.com`. + + - **Secure Token Storage**: + ```shell + zenml secret create gitlab_secret --pa_token= + zenml code-repository register ... --token={{gitlab_secret.pa_token}} + ``` + +#### Custom Code Repository To implement a custom repository: -1. Subclass `zenml.code_repositories.BaseCodeRepository` and implement the following methods: - - `login()` - - `download_files(commit: str, directory: str, repo_sub_directory: Optional[str])` - - `get_local_context(path: str)` +1. **Subclass `BaseCodeRepository`**: + ```python + class BaseCodeRepository(ABC): + @abstractmethod + def login(self) -> None: + pass -2. Register the custom repository: - ```bash + @abstractmethod + def download_files(self, commit: str, directory: str, repo_sub_directory: Optional[str]) -> None: + pass + + @abstractmethod + def get_local_context(self, path: str) -> Optional["LocalRepositoryContext"]: + pass + ``` + +2. 
**Register Custom Repository**: + ```shell zenml code-repository register --type=custom --source=my_module.MyRepositoryClass [--CODE_REPOSITORY_OPTIONS] ``` -This integration enhances pipeline tracking and efficiency by linking code changes directly to ZenML's execution environment. +This integration allows ZenML to track code changes and commit hashes for each pipeline run, enhancing development efficiency. ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/README.md === -# Setting up a Well-Architected ZenML Project - -This guide outlines best practices for structuring ZenML projects to enhance scalability, maintainability, and team collaboration. +# Setting Up a Well-Architected ZenML Project -## Importance of a Well-Architected Project -A well-architected ZenML project is essential for efficient machine learning operations (MLOps), providing a solid foundation for developing, deploying, and maintaining ML models. +## Overview +This guide outlines best practices for structuring ZenML projects to enhance scalability, maintainability, and team collaboration, which are essential for successful machine learning operations (MLOps). ## Key Components @@ -10379,13 +10333,12 @@ A well-architected ZenML project is essential for efficient machine learning ope - Refer to the [Set up repository guide](./set-up-repository.md) for details. ### Version Control and Collaboration -- Integrate with version control systems like Git for: - - Faster pipeline builds using shared images. - - Easy tracking of changes and team collaboration. +- Integrate with Git for efficient code management and collaboration. +- Benefits include faster pipeline builds and easy change tracking. - Learn more in the [Set up a repository guide](./set-up-repository.md). ### Stacks, Pipelines, Models, and Artifacts -- **Stacks**: Define infrastructure and tool configurations. +- **Stacks**: Infrastructure and tool configurations. - **Models**: Represent ML models and metadata. - **Pipelines**: Encapsulate ML workflows. - **Artifacts**: Track data and model outputs. @@ -10393,14 +10346,13 @@ A well-architected ZenML project is essential for efficient machine learning ope ### Access Management and Roles - Define roles (e.g., data scientists, MLOps engineers). -- Set up [service connectors](../../infrastructure-deployment/auth-management/README.md) and manage authorizations. +- Set up service connectors and manage authorizations. - Use [Teams in ZenML Pro](../../../getting-started/zenml-pro/teams.md) for role assignments. - Explore strategies in the [Access Management and Roles guide](../collaborate-with-team/access-management.md). ### Shared Components and Libraries -- Promote code reuse with: - - Custom flavors, steps, and materializers. - - Shared private wheels for internal use. +- Promote code reuse with shared components like custom flavors and steps. +- Use shared private wheels for internal distribution. - Learn about sharing code in the [Shared Libraries and Logic for Teams guide](../collaborate-with-team/shared-components-for-teams.md). ### Project Templates @@ -10409,64 +10361,53 @@ A well-architected ZenML project is essential for efficient machine learning ope ### Migration and Maintenance - Strategies for migrating legacy code and upgrading ZenML servers. -- Find best practices in the [Migration and Maintenance guide](../../advanced-topics/manage-zenml-server/best-practices-upgrading-zenml.md#upgrading-your-code). 
+- Best practices are detailed in the [Migration and Maintenance guide](../../advanced-topics/manage-zenml-server/best-practices-upgrading-zenml.md#upgrading-your-code). ## Getting Started -Explore the guides in this section for detailed information on project setup and management. Regularly review and refine your project structure and practices to adapt to evolving team needs. Following these guidelines will help create a robust and collaborative MLOps environment. +Explore the guides in this section to build your ZenML project. Regularly review and refine your project structure to adapt to your team's needs. Following these guidelines will help create a robust, scalable, and collaborative MLOps environment. ================================================== === File: docs/book/how-to/model-management-metrics/README.md === -# Model Management and Metrics in ZenML +### Model Management and Metrics in ZenML -This section details the management of machine learning models and the tracking of associated metrics within ZenML. +This section details the management of models and tracking of metrics within ZenML. -## Key Components +#### Key Components: -1. **Model Management**: - - ZenML provides tools for versioning, storing, and retrieving models. - - Supports integration with various model registries. +1. **Model Management**: + - ZenML facilitates versioning, deployment, and monitoring of machine learning models. + - Users can register models, track their lineage, and manage different versions effectively. 2. **Metrics Tracking**: - - Metrics can be logged during model training and evaluation. - - ZenML allows for custom metrics to be defined and tracked. - -3. **Pipeline Integration**: - - Models and metrics can be incorporated into pipelines for automated workflows. - - Pipelines can be configured to log metrics at different stages. - -4. **Visualization**: - - Metrics can be visualized using built-in tools or integrated with external visualization libraries. - -5. **Best Practices**: - - Regularly version models to maintain a clear history. - - Define and log relevant metrics to evaluate model performance effectively. + - Metrics can be logged during training and evaluation phases. + - ZenML supports integration with various metrics tracking tools (e.g., MLflow, TensorBoard). + - Users can define custom metrics and visualize them for better insights. -## Example Code Snippet +3. **Implementation**: + - Use decorators and context managers to log metrics automatically. + - Example code snippet for logging metrics: -```python -from zenml.pipelines import pipeline -from zenml.steps import step + ```python + from zenml.steps import step -@step -def train_model(): - # Training logic - pass + @step + def train_model(data): + # Training logic here + metrics = {"accuracy": 0.95} # Example metric + return metrics + ``` -@step -def log_metrics(): - # Log metrics here - pass +4. **Model Registry**: + - Models can be registered in a centralized registry for easy access and deployment. + - Supports tagging and categorization for better organization. -@pipeline -def model_pipeline(): - train = train_model() - log = log_metrics() - train >> log -``` +5. **Deployment**: + - Models can be deployed to various environments (e.g., cloud, on-premise). + - ZenML provides tools to facilitate continuous deployment and integration. -This concise overview provides essential information on model management and metrics tracking in ZenML, ensuring clarity and focus on critical aspects. 
+By leveraging these features, users can ensure effective model management and comprehensive tracking of performance metrics throughout the machine learning lifecycle. ================================================== @@ -10474,10 +10415,10 @@ This concise overview provides essential information on model management and met ### Summary: Attaching Metadata to Artifacts in ZenML -In ZenML, metadata enhances artifacts by providing context such as size, structure, and performance metrics. This metadata is viewable in the ZenML dashboard, aiding in artifact inspection and comparison across pipeline runs. +In ZenML, metadata enhances artifacts by providing context and details such as size, structure, and performance metrics. This metadata is viewable in the ZenML dashboard, aiding in artifact inspection and comparison across pipeline runs. #### Logging Metadata for Artifacts -Artifacts are outputs from pipeline steps (e.g., datasets, models). To log metadata, use the `log_metadata` function with the artifact's name, version, or ID. Metadata can be any JSON-serializable value, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. +Artifacts are outputs from pipeline steps (e.g., datasets, models). To log metadata, use the `log_metadata` function with the artifact name, version, or ID. The metadata can include any JSON-serializable value, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. **Example of Logging Metadata:** ```python @@ -10502,7 +10443,7 @@ def process_data_step(dataframe: pd.DataFrame) -> pd.DataFrame: #### Selecting the Artifact for Metadata Logging 1. **Using `infer_artifact`**: Automatically selects the output artifact of the step. 2. **Name and Version**: If both are provided, ZenML attaches metadata to the specified artifact version. -3. **Artifact Version ID**: Directly fetches and attaches metadata to the specific artifact version. +3. **Artifact Version ID**: Directly attaches metadata to the specific artifact version. #### Fetching Logged Metadata To retrieve logged metadata, use the ZenML Client: @@ -10513,10 +10454,10 @@ client = Client() artifact = client.get_artifact_version("my_artifact", "my_version") print(artifact.run_metadata["metadata_key"]) ``` -*Note: The returned value reflects the latest entry for the specified key.* +*Note: Fetching metadata by key returns the latest entry.* #### Grouping Metadata in the Dashboard -You can group metadata into cards by passing a dictionary of dictionaries in the `metadata` parameter. This organizes metadata into logical sections for better visualization. +You can group metadata into cards in the ZenML dashboard by passing a dictionary of dictionaries to the `metadata` parameter. This organizes metadata into logical sections. **Example of Grouping Metadata:** ```python @@ -10539,18 +10480,19 @@ log_metadata( artifact_version="version", ) ``` -In the ZenML dashboard, `model_metrics` and `data_details` will appear as separate cards with their respective key-value pairs. +In the dashboard, `model_metrics` and `data_details` will be displayed as separate cards. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-run.md === -### Summary: Attaching Metadata to a Run in ZenML +### Attach Metadata to a Run in ZenML -In ZenML, metadata can be logged to a pipeline run using the `log_metadata` function, which accepts a dictionary of key-value pairs. 
Values can be any JSON-serializable type, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. +In ZenML, you can log metadata to a pipeline run using the `log_metadata` function, which accepts a dictionary of key-value pairs. Values can be any JSON-serializable type, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. #### Logging Metadata Within a Run -When logging metadata from within a pipeline step, the metadata key follows the `step_name::metadata_key` pattern, allowing the same key to be used across different steps during execution. + +When logging metadata from within a pipeline step, the metadata key follows the `step_name::metadata_key` pattern, allowing reuse of keys across different steps during execution. **Example: Logging Metadata in a Step** ```python @@ -10568,15 +10510,20 @@ def train_model(dataset: pd.DataFrame) -> Annotated[ classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... - # Log run-level metadata + # Log metadata at the run level log_metadata({ - "run_metrics": {"accuracy": accuracy, "precision": precision, "recall": recall} + "run_metrics": { + "accuracy": accuracy, + "precision": precision, + "recall": recall + } }) return classifier ``` -#### Manually Logging Metadata -Metadata can also be attached to a specific pipeline run post-execution using the run ID. +#### Manually Logging Metadata to a Pipeline Run + +You can also log metadata to a specific pipeline run using its run ID, useful for post-execution metrics. **Example: Manual Metadata Logging** ```python @@ -10589,6 +10536,7 @@ log_metadata( ``` #### Fetching Logged Metadata + To retrieve logged metadata, use the ZenML Client: **Example: Fetching Metadata** @@ -10601,7 +10549,7 @@ run = client.get_pipeline_run("run_id_name_or_prefix") print(run.run_metadata["metadata_key"]) ``` -**Note:** The fetched value for a specific key reflects the latest entry. +**Note:** When fetching metadata by key, the returned value reflects the latest entry. ================================================== @@ -10609,13 +10557,14 @@ print(run.run_metadata["metadata_key"]) ### Summary: Attaching Metadata to a Model in ZenML -ZenML enables logging of metadata for models, providing context beyond individual artifact details. This metadata can include evaluation results, deployment information, and customer-specific details, facilitating better management and interpretation of model performance across versions. +ZenML enables logging metadata for models, providing context beyond individual artifact details. This metadata can include evaluation results, deployment information, or customer-specific details, enhancing model management and performance interpretation across versions. #### Logging Metadata -To log metadata, use the `log_metadata` function, which allows attaching key-value pairs, including metrics and JSON-serializable values like `Uri`, `Path`, and `StorageSize`. +To log metadata for a model, use the `log_metadata` function, which allows attaching key-value pairs, including metrics and JSON-serializable values (e.g., `Uri`, `Path`, `StorageSize`). 
+ +**Example:** -**Example: Logging Metadata for a Model** ```python from typing import Annotated import pandas as pd @@ -10625,6 +10574,7 @@ from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: + """Train a model and log metadata.""" classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... @@ -10640,18 +10590,20 @@ def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactCon ) return classifier ``` + In this example, metadata is associated with the model, summarizing various pipeline steps and artifacts. #### Selecting Models with `log_metadata` ZenML offers flexible options for attaching metadata to model versions: + 1. **Using `infer_model`**: Automatically infers the model from the step context. -2. **Model Name and Version**: Attaches metadata to a specified model version. -3. **Model Version ID**: Directly attaches metadata to a specific model version. +2. **Model Name and Version**: Attach metadata to a specific model version by providing both. +3. **Model Version ID**: Directly attach metadata using a specific model version ID. #### Fetching Logged Metadata -To retrieve attached metadata, use the ZenML Client: +Once metadata is logged, it can be retrieved using the ZenML Client: ```python from zenml.client import Client @@ -10661,7 +10613,8 @@ model = client.get_model_version("my_model", "my_version") print(model.run_metadata["metadata_key"]) ``` -**Note**: Fetching metadata by key returns the latest entry. + +**Note**: When fetching metadata by key, the returned value reflects the latest entry. ================================================== @@ -10669,10 +10622,9 @@ print(model.run_metadata["metadata_key"]) ### Grouping Metadata in the Dashboard -To group key-value pairs in the ZenML dashboard, pass a dictionary of dictionaries in the `metadata` parameter. This organizes metadata into cards, enhancing visualization and understanding. - -**Example:** +To organize metadata in the ZenML dashboard, you can pass a dictionary of dictionaries to the `metadata` parameter in the `log_metadata` function. This allows for logical grouping of metadata into separate cards, enhancing visualization and understanding. +#### Example Code: ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize @@ -10694,7 +10646,7 @@ log_metadata( ) ``` -In the ZenML dashboard, "model_metrics" and "data_details" will be displayed as separate cards, each with their respective key-value pairs. +In the ZenML dashboard, the keys "model_metrics" and "data_details" will be displayed as separate cards, each containing their respective key-value pairs. ================================================== @@ -10702,31 +10654,30 @@ In the ZenML dashboard, "model_metrics" and "data_details" will be displayed as ### Summary of ZenML Metadata Tracking -ZenML supports special metadata types for capturing specific information: `Uri`, `Path`, `DType`, and `StorageSize`. Below is a concise example of how to use these types: +ZenML supports special metadata types to capture specific information, including `Uri`, `Path`, `DType`, and `StorageSize`. 
+**Example Usage:** ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize, DType, Uri, Path -log_metadata( - metadata={ - "dataset_source": Uri("gs://my-bucket/datasets/source.csv"), - "preprocessing_script": Path("/scripts/preprocess.py"), - "column_types": { - "age": DType("int"), - "income": DType("float"), - "score": DType("int") - }, - "processed_data_size": StorageSize(2500000) +log_metadata({ + "dataset_source": Uri("gs://my-bucket/datasets/source.csv"), + "preprocessing_script": Path("/scripts/preprocess.py"), + "column_types": { + "age": DType("int"), + "income": DType("float"), + "score": DType("int") }, -) + "processed_data_size": StorageSize(2500000) +}) ``` **Key Points:** -- `Uri`: Represents the dataset source URI. -- `Path`: Specifies the filesystem path to a script. -- `DType`: Describes data types of columns. -- `StorageSize`: Indicates the size of processed data in bytes. +- **Uri**: Represents the source URI of the dataset. +- **Path**: Specifies the filesystem path to a preprocessing script. +- **DType**: Describes the data types of specific columns. +- **StorageSize**: Indicates the size of processed data in bytes. These types standardize metadata format, ensuring consistent and interpretable logging. @@ -10734,9 +10685,9 @@ These types standardize metadata format, ensuring consistent and interpretable l === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-pipeline.md === -### Fetch Metadata During Pipeline Composition +### Fetching Metadata During Pipeline Composition -#### Pipeline Configuration Using `PipelineContext` +#### Pipeline Configuration with `PipelineContext` To access pipeline configuration during composition, use the `zenml.get_pipeline_context()` function to retrieve the `PipelineContext`. @@ -10755,31 +10706,28 @@ from zenml import get_pipeline_context, pipeline def my_pipeline(): context = get_pipeline_context() after = [] - search_steps_prefix = "hp_tuning_search_" - - for i, model_search_configuration in enumerate(context.extra["complex_parameter"]): - step_name = f"{search_steps_prefix}{i}" + for i, model_config in enumerate(context.extra["complex_parameter"]): + step_name = f"hp_tuning_search_{i}" cross_validation( - model_package=model_search_configuration[0], - model_class=model_search_configuration[1], + model_package=model_config[0], + model_class=model_config[1], id=step_name ) after.append(step_name) - - select_best_model(search_steps_prefix=search_steps_prefix, after=after) + select_best_model(search_steps_prefix="hp_tuning_search_", after=after) ``` -For more details on the attributes and methods available in `PipelineContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.pipelines.pipeline_context.PipelineContext). +For more details on `PipelineContext` attributes and methods, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.pipelines.pipeline_context.PipelineContext). 
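One detail the example above leaves implicit is where `context.extra["complex_parameter"]` comes from: the `extra` dictionary has to be supplied as part of the pipeline configuration. A minimal sketch of one way to provide it via `with_options` — the `(model_package, model_class)` pairs below are illustrative placeholders, not values from the original docs:

```python
# Hypothetical search space consumed by the hp_tuning_search_* steps above.
configured_pipeline = my_pipeline.with_options(
    extra={
        "complex_parameter": [
            ("sklearn.ensemble", "RandomForestClassifier"),
            ("sklearn.linear_model", "LogisticRegression"),
        ]
    }
)
configured_pipeline()
```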
================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md === -### Accessing Meta Information in Real-Time within Your Pipeline +### Accessing Meta Information in Real-Time -#### Fetching Metadata within Steps +#### Fetch Metadata Within Steps -To access information about the currently running pipeline or step, use the `zenml.get_step_context()` function to obtain the `StepContext`: +To access information about the current pipeline or step, use the `zenml.get_step_context()` function to obtain the `StepContext`: ```python from zenml import step, get_step_context @@ -10792,7 +10740,7 @@ def my_step(): step_name = step_context.step_run.name ``` -You can also retrieve the output storage URI and the Materializer class used for saving outputs: +You can also retrieve the output storage URI and the associated Materializer class for saving outputs: ```python from zenml import step, get_step_context @@ -10800,11 +10748,11 @@ from zenml import step, get_step_context @step def my_step(): step_context = get_step_context() - uri = step_context.get_output_artifact_uri() - materializer = step_context.get_output_materializer() + uri = step_context.get_output_artifact_uri() # Output storage URI + materializer = step_context.get_output_materializer() # Output Materializer ``` -For more details on the attributes and methods available in `StepContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.steps.step_context.StepContext). +For more details on `StepContext` attributes and methods, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.steps.step_context.StepContext). ================================================== @@ -10812,7 +10760,7 @@ For more details on the attributes and methods available in `StepContext`, refer ### Summary: Attaching Metadata to a Step in ZenML -In ZenML, metadata can be logged for a specific step using the `log_metadata` function, which accepts a dictionary of key-value pairs. This metadata can include any JSON-serializable values, such as custom classes like `Uri`, `Path`, `DType`, and `StorageSize`. +In ZenML, you can log metadata for a specific step using the `log_metadata` function, which accepts a dictionary of key-value pairs. This metadata can include any JSON-serializable values, such as custom classes (`Uri`, `Path`, `DType`, `StorageSize`). #### Logging Metadata Within a Step When `log_metadata` is called within a step, it attaches the metadata to the currently executing step and its pipeline run. This is useful for logging metrics available during execution. @@ -10829,35 +10777,27 @@ from zenml import step, log_metadata, ArtifactConfig def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... - - log_metadata(metadata={ - "evaluation_metrics": { - "accuracy": accuracy, - "precision": precision, - "recall": recall - } - }) + + log_metadata(metadata={"evaluation_metrics": {"accuracy": accuracy, "precision": precision, "recall": recall}}) return classifier ``` -**Note:** If a pipeline step execution is cached, the cached run will copy the original step's metadata, excluding any manually added metadata post-execution. 
+**Note:** If a pipeline execution is cached, the cached step run will copy the original metadata, but manually generated metadata post-execution will not be included. #### Manually Logging Metadata After Execution -Metadata can also be logged after a step's execution using identifiers for the pipeline, step, and run. +You can log metadata after a step's execution using identifiers for the pipeline, step, and run. **Example:** ```python from zenml import log_metadata log_metadata(metadata={"additional_info": {"a_number": 3}}, step_name="step_name", run_id_name_or_prefix="run_id_name_or_prefix") - -# or - +# or log_metadata(metadata={"additional_info": {"a_number": 3}}, step_id="step_id") ``` #### Fetching Logged Metadata -To retrieve logged metadata, use the ZenML Client: +To fetch logged metadata, use the ZenML Client: **Example:** ```python @@ -10865,11 +10805,10 @@ from zenml.client import Client client = Client() step = client.get_pipeline_run("pipeline_id").steps["step_name"] - print(step.run_metadata["metadata_key"]) ``` -**Note:** Fetching metadata by key will return the latest entry. +**Note:** Fetching metadata by key returns the latest entry. ================================================== @@ -10877,12 +10816,12 @@ print(step.run_metadata["metadata_key"]) # Tracking and Comparing Metrics and Metadata in ZenML -ZenML provides a unified `log_metadata` function to log and manage metrics and metadata across various entities such as models, artifacts, steps, and runs. +ZenML provides a unified method to log and manage metrics and metadata using the `log_metadata` function. This function allows logging across various entities such as models, artifacts, steps, and runs, with options for automatic logging for related entities. ## Logging Metadata -### Basic Usage -To log metadata within a step: +### Basic Use-Case +You can log metadata within a step as follows: ```python from zenml import step, log_metadata @@ -10891,10 +10830,11 @@ from zenml import step, log_metadata def my_step() -> ...: log_metadata(metadata={"accuracy": 0.91}) ``` -This logs the `accuracy` for the step and its pipeline run. -### Comprehensive Example -In a machine learning pipeline, you can log multiple types of metadata: +This logs the `accuracy` for the step, its pipeline run, and the model version if provided. + +### Real-World Example +Here’s a more detailed example in a machine learning pipeline: ```python from zenml import step, pipeline, log_metadata @@ -10922,52 +10862,55 @@ def telemetry_pipeline(): efficiency = process_engine_metrics() analyze_flight_telemetry(efficiency) ``` -The logged data can be visualized in the ZenML Pro dashboard. + +This data can be visualized in the ZenML Pro dashboard, specifically using the Experiment Comparison tool, which is currently in Alpha Preview. ## Visualizing and Comparing Metadata (Pro) -Once metadata is logged, you can use ZenML's Experiment Comparison tool to analyze and compare metrics across runs. This tool supports: +Once metadata is logged, you can analyze and compare metrics across different runs using the Experiment Comparison tool in the ZenML Pro dashboard. -1. **Table View**: Compare metadata with automatic change tracking. -2. **Parallel Coordinates Plot**: Visualize relationships between metrics. +### Comparison Views +The tool offers: +1. **Table View**: Compare metadata across runs with automatic change tracking. +2. **Parallel Coordinates Plot**: Visualize relationships between different metrics. 
You can compare up to 20 pipeline runs simultaneously, supporting any numerical metadata (`float` or `int`). ### Additional Use-Cases -The `log_metadata` function allows logging to specific entities (model, artifact, step, or run). For more details, refer to: -- Logging metadata to a step -- Logging metadata to a run -- Logging metadata to an artifact -- Logging metadata to a model +The `log_metadata` function supports various use-cases by specifying the target entity (e.g., model, artifact, step, or run). More details can be found in the following pages: +- Log metadata to a step +- Log metadata to a run +- Log metadata to an artifact +- Log metadata to a model -**Note**: Older methods like `log_model_metadata`, `log_artifact_metadata`, and `log_step_metadata` are deprecated. Use `log_metadata` for all future implementations. +**Note**: Older methods for logging metadata (e.g., `log_model_metadata`, `log_artifact_metadata`, `log_step_metadata`) are deprecated. Use `log_metadata` for future implementations. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/promote-a-model.md === -# Promote a Model +# Model Promotion in ZenML -## Stages and Promotion -Model stages track the lifecycle of ZenML Model versions. A model can be promoted through the Dashboard, ZenML CLI, or Python SDK. Stages include: -- **staging**: Ready for production. -- **production**: Actively used in production. -- **latest**: Virtual stage for the most recent version; cannot be promoted to. -- **archived**: No longer relevant. +## Overview +ZenML allows the promotion of model versions through various stages in their lifecycle, providing metadata to identify the state of each version. The stages include: +- **staging**: Prepared for production. +- **production**: Actively running in production. +- **latest**: Represents the most recent version (not a promotable stage). +- **archived**: No longer relevant, indicating a model has moved out of other stages. -### Promotion Methods +## Promotion Methods -#### CLI Promotion -Use the CLI to promote a model version: +### CLI Promotion +Use the following command to promote a model version via the CLI: ```bash zenml model version update iris_logistic_regression --stage=... ``` -#### Cloud Dashboard Promotion -This feature is coming soon for direct promotion from the ZenML Pro dashboard. +### Cloud Dashboard Promotion +Promotion through the ZenML Pro dashboard will be available soon. -#### Python SDK Promotion -The most common method: +### Python SDK Promotion +The most common method for promoting models is through the Python SDK: ```python from zenml import Model from zenml.enums import ModelStages @@ -10980,7 +10923,7 @@ latest_model = Model(name=MODEL_NAME, version=ModelStages.LATEST) latest_model.set_stage(stage=ModelStages.STAGING) ``` -In a pipeline: +In a pipeline context, retrieve the model from the step context: ```python from zenml import get_step_context, step, pipeline from zenml.enums import ModelStages @@ -10996,7 +10939,7 @@ def train_and_promote_model(): ``` ## Fetching Model Versions by Stage -To load a specific model version by stage: +To load the appropriate model version by stage, specify the version: ```python from zenml import Model, step, pipeline @@ -11008,20 +10951,21 @@ def svc_trainer(...) 
-> ...: @pipeline(model=model) def training_pipeline(...): - # training happens here + # training logic here ``` +This configuration ensures that the specified model version is used throughout the pipeline. + ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/linking-model-binaries-data-to-models.md === # Linking Model Binaries/Data in ZenML -Model artifacts generated during pipeline runs can be linked to models in ZenML for lineage tracking and transparency in training, evaluation, and inference. +ZenML allows linking model artifacts generated during pipeline runs to models for lineage tracking and transparency in training, evaluation, and inference. ## Configuring the Model at Pipeline Level - -You can link artifacts by configuring the `model` parameter in the `@pipeline` or `@step` decorator: +You can link artifacts by configuring the `model` parameter in the `@pipeline` or `@step` decorators: ```python from zenml import Model, pipeline @@ -11036,8 +10980,7 @@ def my_pipeline(): This links all artifacts from the pipeline run to the specified model. ## Saving Intermediate Artifacts - -To save intermediate work, use the `save_artifact` utility function. If the step has the Model context configured, it will automatically link to it. +To save progress during training (e.g., epoch-based training), use the `save_artifact` utility. If the step has the Model context configured, it will automatically link to the model. ```python from zenml import step, Model @@ -11055,8 +10998,7 @@ def trainer(trn_dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactCon ``` ## Explicitly Linking Artifacts - -To link an artifact to a model outside of a step, use the `link_artifact_to_model` function. +To link an artifact to a model outside of a step, use the `link_artifact_to_model` function. You need the artifact and model configuration. ```python from zenml import step, Model, link_artifact_to_model, save_artifact @@ -11071,7 +11013,7 @@ existing_artifact = Client().get_artifact_version(name_id_or_prefix="existing_ar link_artifact_to_model(artifact_version_id=existing_artifact.id, model=Model(name="MyModel", version="0.2.42")) ``` -This allows for linking artifacts both within and outside of steps. +This documentation provides methods for linking model artifacts in ZenML, ensuring efficient tracking and management of model versions and their associated artifacts. ================================================== @@ -11079,27 +11021,29 @@ This allows for linking artifacts both within and outside of steps. ### Deleting Models in ZenML -Deleting a model or its specific version removes all links to artifacts and pipeline runs, along with all associated metadata. +**Overview**: Deleting a model or a specific version removes all links to artifacts, pipeline runs, and associated metadata. 
#### Deleting All Versions of a Model
-- **CLI Command:**
+
+- **CLI Command**:
  ```shell
  zenml model delete <MODEL_NAME>
  ```
-
-- **Python SDK:**
+
+- **Python SDK**:
  ```python
  from zenml.client import Client
  Client().delete_model(<MODEL_NAME>)
  ```

#### Deleting a Specific Version of a Model
-- **CLI Command:**
+
+- **CLI Command**:
  ```shell
  zenml model version delete <MODEL_VERSION_NAME_OR_ID>
  ```
-
-- **Python SDK:**
+
+- **Python SDK**:
  ```python
  from zenml.client import Client
  Client().delete_model_version(<MODEL_VERSION_ID>)
  ```

@@ -11113,9 +11057,9 @@ This documentation provides the necessary commands to delete models and their ve

# Model Versions Overview

-Model versions in ZenML allow tracking of different iterations during the machine learning training process, providing dashboard and API functionalities for the ML lifecycle. You can associate model versions with stages (e.g., production, staging) and link them to non-technical artifacts like datasets. Model versions are created automatically during training, but can also be explicitly named via the `version` argument in the `Model` object.
+Model versions in ZenML allow tracking of different iterations during the machine learning training process, with dashboard and API support for the ML lifecycle. You can associate model versions with stages (e.g., production, staging) and link them to non-technical artifacts like datasets or business data. Model versions are created automatically during training, but can also be explicitly named via the `version` argument in the `Model` object.

-## Explicit Naming of Model Versions
+## Explicitly Naming Model Versions

To explicitly name a model version:

@@ -11133,11 +11077,11 @@ def training_pipeline(...):
    # training happens here
```

-If the specified version exists, it is automatically associated with the pipeline.
+If the model version exists, it is automatically associated with the pipeline.

## Templated Naming for Model Versions

-For semantic naming of model versions, use templates in the `version` and/or `name` arguments:
+For semantic versioning, use templated names in the `version` and/or `name` arguments:

```python
from zenml import Model, step, pipeline

model = Model(
    name="experiment_with_phi_3_{date}_{time}",
@@ -11153,17 +11097,17 @@ def training_pipeline(...):
    # training happens here
```

-This will generate unique names for each run, such as `experiment_with_phi_3_2024_08_30_12_42_53`. Standard substitutions available include `{date}` and `{time}`.
+This will generate unique, readable names for each run, like `experiment_with_phi_3_2024_08_30_12_42_53`. Substitutions can be set at different levels (pipeline or step).

## Fetching Model Versions by Stage

-Model versions can be assigned stages (e.g., `production`, `staging`) to facilitate fetching:
+Assign stages to model versions (e.g., `production`, `staging`) for semantic retrieval:

```shell
zenml model version update MODEL_NAME --stage=STAGE
```

-You can then fetch the model version by its stage:
+You can then fetch the model version using its stage:

```python
from zenml import Model, step, pipeline

@@ -11193,41 +11137,34 @@ def svc_trainer(...) -> ...:
    ...
```

-ZenML tracks the version sequence, ensuring that new versions increment correctly:
+This creates a new version, incrementing the sequence. For example:

```python
-from zenml import Model
-
earlier_version = Model(name="my_model", version="really_good_version").number # == 5
updated_version = Model(name="my_model", version="even_better_version").number # == 6
```

-This structure allows for organized version control throughout the ML lifecycle.
+This ensures that each new model version is tracked correctly in the iteration sequence. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/connecting-artifacts-via-a-model.md === -### Structuring an MLOps Project - -This documentation outlines how to structure an MLOps project by connecting artifacts through pipelines. Key components include managing artifacts, models, and pipelines, which collectively inform project structure. - -#### Recommended Repository Structure -For best practices on repository structure in a ZenML MLOps project, refer to the [best practices section](../../project-setup-and-management/setting-up-a-project-repository/README.md). +### Summary: Structuring an MLOps Project -#### Pipeline Breakdown -An MLOps project typically consists of multiple pipelines: -- **Feature Engineering Pipeline**: Prepares raw data for training. -- **Training Pipeline**: Trains models using data from the feature engineering pipeline. -- **Inference Pipeline**: Runs batch predictions on the trained model. -- **Deployment Pipeline**: Deploys the trained model to a production environment. +#### Overview +An MLOps project typically consists of multiple pipelines that manage the flow of data and models. Key pipeline types include: +- **Feature Engineering Pipeline**: Prepares raw data. +- **Training Pipeline**: Trains models using prepared data. +- **Inference Pipeline**: Runs predictions using trained models. +- **Deployment Pipeline**: Deploys models to production. -The structure of these pipelines can vary based on project requirements, and information (artifacts, models, metadata) often needs to be shared between them. +The structure of these pipelines can vary based on project requirements, and they often need to share artifacts, models, and metadata. -#### Common Patterns for Artifact Exchange +#### Artifact Exchange Patterns -1. **Artifact Exchange via `Client`**: - - Use the ZenML Client to facilitate the transfer of datasets between pipelines. +1. **Artifact Exchange via Client** + - Use the ZenML Client to exchange artifacts between pipelines. - Example: ```python from zenml import pipeline @@ -11245,11 +11182,10 @@ The structure of these pipelines can vary based on project requirements, and inf sklearn_classifier = model_trainer(train_data) model_evaluator(model, sklearn_classifier) ``` + - Note: `train_data` and `test_data` are references, not materialized in memory. - **Note**: Artifacts are referenced, not materialized in memory during the pipeline function. - -2. **Artifact Exchange via `Model`**: - - Use ZenML Models as references for artifacts, allowing pipelines to operate independently. +2. **Artifact Exchange via Model** + - Use ZenML Model as a reference point for artifacts. 
- Example: ```python from zenml import step, get_step_context @@ -11260,27 +11196,19 @@ The structure of these pipelines can vary based on project requirements, and inf predictions = pd.Series(model.predict(data)) return predictions ``` - - - Alternatively, resolve the artifact at the pipeline level: + - Alternatively, resolve artifacts at the pipeline level: ```python from zenml import get_pipeline_context, pipeline, Model from zenml.enums import ModelStages - @step - def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: - return pd.Series(model.predict(data)) - @pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) def do_predictions(): - model = get_pipeline_context().model - inference_data = load_data() - predict(model=model.get_model_artifact("trained_model"), data=inference_data) - - if __name__ == "__main__": - do_predictions() + model = get_pipeline_context().model.get_model_artifact("trained_model") + predict(model=model, data=load_data()) ``` -Both artifact exchange methods are valid; the choice depends on user preference. +#### Conclusion +Choosing between artifact exchange methods depends on project needs and personal preference. Both methods effectively facilitate the sharing of models and artifacts across pipelines. ================================================== @@ -11291,7 +11219,7 @@ Both artifact exchange methods are valid; the choice depends on user preference. ## Loading a Model in Code ### 1. Load the Active Model in a Pipeline -You can access the active model and its metadata within a pipeline using the following code: +To load the active model in a ZenML pipeline, you can access model metadata and associated artifacts as follows: ```python from zenml import step, pipeline, get_step_context, Model @@ -11309,7 +11237,7 @@ def my_step(): ``` ### 2. Load Any Model via the Client -You can also load a model using the `Client` class: +You can also load any model using the `Client`: ```python from zenml import step @@ -11327,7 +11255,7 @@ def model_evaluator_step(): staging_zenml_model = None ``` -This documentation outlines two methods for loading models in ZenML: through the active model in a pipeline and via the `Client` class. Each method provides access to model metadata and artifacts. +This documentation provides methods to load models in ZenML, either through active pipeline context or using the Client API. ================================================== @@ -11335,22 +11263,21 @@ This documentation outlines two methods for loading models in ZenML: through the # Model Registration in ZenML -Models can be registered in ZenML through various methods: CLI, Python SDK, or implicitly during a pipeline run. ZenML Pro users also have a dashboard interface for model registration. +Models can be registered in ZenML through various methods: CLI, Python SDK, or implicitly during a pipeline run. ZenML Pro users can also utilize a dashboard interface for model registration. ## Explicit CLI Registration -To register a model via the CLI, use the command: +To register a model using the CLI, use the following command: ```bash zenml model register iris_logistic_regression --license=... --description=... ``` - -For more options, run `zenml model register --help`. You can also add tags using the `--tag` option. +For additional options, run `zenml model register --help`. You can also add tags using the `--tag` option. 
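For instance, tags can be attached alongside the other registration flags — a small sketch with illustrative values:

```bash
zenml model register iris_logistic_regression \
    --license=apache-2.0 \
    --description="Logistic regression classifier for the iris dataset" \
    --tag classifier
```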
## Explicit Dashboard Registration ZenML Pro users can register models directly from the cloud dashboard. ## Explicit Python SDK Registration -To register a model using the Python SDK, use the following code: +To register a model using the Python SDK: ```python from zenml import Model @@ -11365,7 +11292,7 @@ Client().create_model( ``` ## Implicit Registration by ZenML -Models can also be registered implicitly during a pipeline run by specifying a `Model` object in the `@pipeline` decorator. Here’s an example: +Models can be registered implicitly during a pipeline run by specifying a `Model` object in the `@pipeline` decorator: ```python from zenml import pipeline @@ -11382,67 +11309,57 @@ from zenml import Model def train_and_promote_model(): ... ``` - -Running this pipeline creates a new model version linked to the training artifacts. +This approach creates a new model version while linking to the associated artifacts. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/load-artifacts-from-model.md === -### Summary of Documentation on Loading Artifacts from Models +### Summary of Documentation on Loading Artifacts from a Model -This documentation discusses the process of loading artifacts from a trained model in a two-pipeline project, where the first pipeline handles training and the second performs batch inference. +This documentation outlines how to load artifacts from a model in a two-pipeline project, where the first pipeline handles training and the second performs batch inference using the trained model artifacts. #### Key Points: -1. **Model Artifact Loading**: - - Artifacts can be passed between pipelines using the `Model` class. - - It is crucial to understand when and how to load these artifacts. - -2. **Example Implementation**: - - The example demonstrates a prediction pipeline using a trained model artifact. - -```python -from typing_extensions import Annotated -from zenml import get_pipeline_context, pipeline, Model -import pandas as pd -from sklearn.base import ClassifierMixin - -@step -def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: - return pd.Series(model.predict(data)) - -@pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) -def do_predictions(): - model = get_pipeline_context().model - inference_data = load_data() - predict(model=model.get_model_artifact("trained_model"), data=inference_data) +1. **Model Context**: + - Use `get_pipeline_context().model` to access the model context during pipeline execution. + - The model version (e.g., `ModelStages.PRODUCTION`) may change before execution, affecting artifact retrieval. -if __name__ == "__main__": - do_predictions() -``` +2. **Artifact Loading**: + - Artifacts are loaded at runtime, ensuring the correct version is used during step execution. + - Example of loading a trained model artifact: + ```python + model.get_model_artifact("trained_model") + ``` -3. **Model Context**: - - The `get_pipeline_context().model` retrieves the model context during pipeline execution. - - The model version may change before execution, affecting artifact retrieval. +3. 
**Pipeline Example**: + - The `do_predictions` pipeline demonstrates how to perform inference: + ```python + @pipeline( + model=Model(name="iris_classifier", version=ModelStages.PRODUCTION), + ) + def do_predictions(): + model = get_pipeline_context().model + inference_data = load_data() + predict(model=model.get_model_artifact("trained_model"), data=inference_data) + ``` 4. **Alternative Method**: - - The same functionality can be achieved using the `Client` class to directly access the model version. - -```python -from zenml.client import Client + - An alternative approach using the `Client` class to directly fetch the model version: + ```python + from zenml.client import Client -@pipeline -def do_predictions(): - model = Client().get_model_version("iris_classifier", ModelStages.PRODUCTION) - inference_data = load_data() - predict(model=model.get_model_artifact("trained_model"), data=inference_data) -``` + @pipeline + def do_predictions(): + model = Client().get_model_version("iris_classifier", ModelStages.PRODUCTION) + inference_data = load_data() + predict(model=model.get_model_artifact("trained_model"), data=inference_data) + ``` -5. **Artifact Evaluation**: - - Artifact evaluation occurs during the actual step execution, ensuring the correct model version is used. +5. **Execution Timing**: + - Artifact evaluation occurs during the actual step run, ensuring the latest model is utilized. -This summary encapsulates the essential technical information regarding loading model artifacts in ZenML pipelines, maintaining clarity and conciseness. +This concise overview captures the essential technical details for understanding how to load artifacts from a model in ZenML. ================================================== @@ -11450,8 +11367,9 @@ This summary encapsulates the essential technical information regarding loading ### Summary: Associating a Pipeline with a Model -To associate a pipeline with a model in ZenML, use the following code structure: +To associate a pipeline with a model in ZenML, use the `@pipeline` decorator. This allows you to create a new version of the model if it already exists or attach the pipeline to an existing model version. +#### Example Code: ```python from zenml import pipeline from zenml import Model @@ -11461,17 +11379,15 @@ from zenml.enums import ModelStages model=Model( name="ClassificationModel", # Unique model name tags=["MVP", "Tabular"], # Tags for filtering - version=ModelStages.LATEST # Specify model version or stage + version=ModelStages.LATEST # Specify model stage: [STAGING, PRODUCTION] ) ) def my_pipeline(): ... ``` -This code associates the pipeline with the specified model. If the model exists, a new version is created. To attach the pipeline to an existing model version, specify the version explicitly. - -Model configuration can also be managed in a YAML file: - +#### Configuration File Option: +You can also define the model configuration in a YAML file: ```yaml model: name: text_classifier @@ -11479,23 +11395,23 @@ model: tags: ["classifier", "sgd"] ``` -This allows for better organization and management of model attributes. +This setup allows for organized model management and easy version control within your ZenML pipelines. 
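To apply such a YAML configuration at run time, the file path can be passed when invoking the pipeline — a minimal sketch, assuming the configuration above is saved as `model_config.yaml`:

```python
if __name__ == "__main__":
    # Loads the model (and any other) configuration from the YAML file.
    my_pipeline.with_options(config_path="model_config.yaml")()
```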
================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/README.md === -### ZenML Model Control Plane Overview +# Use the Model Control Plane -A **Model** in ZenML is an entity that consolidates pipelines, artifacts, metadata, and essential business data, representing your ML product's business logic. It can be viewed as a "project" or "workspace." +A `Model` in ZenML is an entity that consolidates pipelines, artifacts, metadata, and essential business data, representing your ML product's business logic. It can be viewed as a "project" or "workspace." **Key Points:** -- The **technical model** is a primary artifact associated with a ZenML Model, containing the model file(s) with weights and parameters from training. Other relevant artifacts include training data and production predictions. -- Models are first-class entities in ZenML, accessible through the ZenML API, client, and the [ZenML Pro](https://zenml.io/pro) dashboard. -- Models capture lineage information and support version staging, allowing users to manage different versions (e.g., `Production`) and apply business rules for promotions during training. -- The **Model Control Plane** provides a unified interface for managing models, integrating pipeline logic, artifacts, and business data with the technical model. +- The technical model (model files with weights and parameters) is a primary artifact associated with a ZenML Model, but other artifacts like training data and production predictions are also included. +- Models are first-class citizens in ZenML, managed through a unified API and the ZenML Pro dashboard. +- A Model captures lineage information and supports versioning, allowing you to stage different Model versions (e.g., `Production`) and make promotion decisions based on business rules. +- The Model Control Plane provides a centralized interface for managing models, combining pipeline logic, artifacts, and business data with the technical model. -For a detailed example, refer to the [starter guide](../../../user-guide/starter-guide/track-ml-models.md). +For a complete example, refer to the [starter guide](../../../user-guide/starter-guide/track-ml-models.md). ================================================== @@ -11503,55 +11419,64 @@ For a detailed example, refer to the [starter guide](../../../user-guide/starter # Advanced Topics in ZenML -This section outlines advanced features and configurations available in ZenML, a machine learning operations (MLOps) framework. +This section addresses advanced features and configurations in ZenML, focusing on enhancing the functionality and customization of the framework. ## Key Features -1. **Custom Components**: Users can create custom components to extend functionality. Components can be defined using Python functions or classes and must implement specific interfaces. +1. **Custom Components**: Users can create custom components to extend ZenML's capabilities, allowing for tailored data processing and model training. -2. **Pipelines**: ZenML allows for the creation of reusable pipelines that can be executed in various environments. Pipelines can be parameterized and composed of multiple steps. +2. **Pipelines**: ZenML supports complex pipelines that can be configured with various steps, including data ingestion, preprocessing, model training, and evaluation. -3. 
**Artifact Management**: ZenML provides built-in artifact management to track and store outputs from pipeline steps, enabling reproducibility and traceability. +3. **Artifact Management**: ZenML provides mechanisms for managing artifacts generated during pipeline execution, ensuring reproducibility and traceability. -4. **Integrations**: ZenML supports integration with various tools and platforms (e.g., TensorFlow, PyTorch, AWS, GCP) to facilitate seamless workflows. +4. **Integrations**: The framework integrates with various tools and platforms (e.g., MLflow, TensorFlow, and Kubernetes) to streamline workflows. -5. **Versioning**: Users can version pipelines and components, ensuring that changes are tracked and previous versions can be restored. - -6. **Secrets Management**: ZenML includes features for managing sensitive information, such as API keys and passwords, securely within pipelines. +5. **Versioning**: ZenML supports versioning of pipelines and components, enabling users to track changes and manage different iterations effectively. ## Configuration -- **Settings**: Configuration can be managed through a YAML file or environment variables, allowing customization of pipeline execution and component behavior. - -- **Environment Setup**: Users can set up different environments (e.g., local, staging, production) to manage dependencies and configurations effectively. +- **Settings**: Configuration settings can be adjusted in the ZenML configuration file, allowing users to specify parameters like logging levels, storage backends, and execution environments. + +- **Environment Setup**: Users can set up different environments (e.g., local, cloud) to optimize performance and resource utilization. ## Example Code Snippet ```python from zenml.pipelines import pipeline +from zenml.steps import step + +@step +def data_ingestion(): + # Ingest data + pass + +@step +def model_training(data): + # Train model + pass @pipeline def my_pipeline(): - # Define pipeline steps here - pass + data = data_ingestion() + model_training(data) # Run the pipeline -my_pipeline.run() +my_pipeline() ``` -This summary captures the essential aspects of advanced features and configurations in ZenML, ensuring that critical information is retained for further inquiries. +This concise overview provides essential insights into the advanced capabilities of ZenML, ensuring users can leverage its features effectively. ================================================== === File: docs/book/how-to/customize-docker-builds/use-a-prebuilt-image.md === -### Summary of ZenML Prebuilt Image Documentation +### Using a Prebuilt Image for ZenML Pipeline Execution -**Overview**: ZenML allows users to skip building a Docker image for pipeline execution by using a prebuilt image. This can save time and resources, especially when dependencies are large or the local system is slow. +ZenML allows you to skip building a Docker image for your pipeline by using a prebuilt image. This can save time and costs, especially if your dependencies are large or your local system is slow. However, using a prebuilt image means you won't receive updates to your code or dependencies unless they are included in the image. -**Key Points**: -- **Docker Image Building**: By default, ZenML builds a Docker image with a base ZenML image and project dependencies. If no code repository is registered and `allow_download_from_artifact_store` is `False`, the pipeline code is also added. 
-- **Using Prebuilt Images**: To use a prebuilt image, set the `parent_image` attribute in `DockerSettings` and `skip_build` to `True`. +#### How to Use a Prebuilt Image + +To utilize a prebuilt image, configure the `DockerSettings` class by setting the `parent_image` and `skip_build` attributes: ```python docker_settings = DockerSettings( @@ -11564,83 +11489,93 @@ def my_pipeline(...): ... ``` -- **Image Requirements**: The prebuilt image must contain all necessary dependencies, including: - - **Stack Requirements**: Ensure the image meets the requirements of the ZenML stack. - - ```python - from zenml.client import Client +Ensure the image is accessible to the orchestrator and other components without ZenML's involvement. - stack_name = - Client().set_active_stack(stack_name) - active_stack = Client().active_stack - stack_requirements = active_stack.requirements() - ``` +#### Requirements for the Parent Image - - **Integration Requirements**: Install dependencies for all integrations used in the pipeline. +The prebuilt image must contain: +- All dependencies required to run your pipeline. +- Any code files if no code repository is registered and `allow_download_from_artifact_store` is `False`. - ```python - from zenml.integrations.registry import integration_registry - from zenml.integrations.constants import HUGGINGFACE, PYTORCH - - required_integrations = [PYTORCH, HUGGINGFACE] - integration_requirements = set( - itertools.chain.from_iterable( - integration_registry.select_integration_requirements( - integration_name=integration, - target_os=OperatingSystemType.LINUX, - ) - for integration in required_integrations - ) - ) - ``` +If using an image built by ZenML from a previous run, it can be reused as long as it was built for the same stack. - - **Project-Specific Requirements**: Include any additional dependencies in the Dockerfile. +#### Stack and Integration Requirements - ```Dockerfile - RUN pip install -r FILE - ``` +To determine stack requirements: + +```python +from zenml.client import Client + +stack_name = +Client().set_active_stack(stack_name) +active_stack = Client().active_stack +stack_requirements = active_stack.requirements() +``` - - **System Packages**: Install necessary `apt` packages. +For integration dependencies: - ```Dockerfile - RUN apt-get update && apt-get install -y --no-install-recommends YOUR_APT_PACKAGES - ``` +```python +from zenml.integrations.registry import integration_registry +from zenml.integrations.constants import HUGGINGFACE, PYTORCH +import itertools + +required_integrations = [PYTORCH, HUGGINGFACE] +integration_requirements = set( + itertools.chain.from_iterable( + integration_registry.select_integration_requirements( + integration_name=integration, + target_os=OperatingSystemType.LINUX, + ) + for integration in required_integrations + ) +) +``` + +#### Project-Specific and System Packages + +For project-specific dependencies, include them in your `Dockerfile`: + +```Dockerfile +RUN pip install -r FILE +``` + +For system packages, use: + +```Dockerfile +RUN apt-get update && apt-get install -y --no-install-recommends YOUR_APT_PACKAGES +``` + +#### Including Project Code Files -- **Project Code Files**: Ensure the execution environment has the necessary code files: - - If a code repository is registered, ZenML will handle the code. - - If `allow_download_from_artifact_store` is `True`, ZenML uploads the code to the artifact store. - - If both options are disabled, include the code files in the image, ideally placing them in the `/app` directory. 
+- If a code repository is registered, ZenML will handle code retrieval. +- If `allow_download_from_artifact_store` is `True`, ZenML will upload your code. +- If both options are disabled, include your code files in the image, ideally in the `/app` directory. -**Important Notes**: -- Using a prebuilt image means updates to code or dependencies won't be reflected unless the image is rebuilt. -- Ensure Python, `pip`, and `zenml` are installed in the image. +Ensure Python, `pip`, and `zenml` are installed in your image. ================================================== === File: docs/book/how-to/customize-docker-builds/which-files-are-built-into-the-image.md === -### ZenML Image Building Overview +### ZenML Image Building and File Management -ZenML determines the root directory for source files in the following order: -1. If `zenml init` has been run in the current or parent directory, that directory is used as the root. -2. If not, the parent directory of the executing Python file is used. +ZenML determines the root directory of source files in the following order: +1. If `zenml init` has been executed in the current or parent directory, that directory is the root. +2. If not, the parent directory of the executing Python file is used as the root. -#### DockerSettings Attributes -You can control file handling with these attributes in `DockerSettings`: +You can control file handling in the root directory using the `DockerSettings` attributes: -- **`allow_download_from_code_repository`**: If `True` and the files are in a registered code repository without local changes, files are downloaded from the repository instead of being included in the image. - -- **`allow_download_from_artifact_store`**: If the previous option is `False` or no suitable repository exists, and this is `True`, ZenML archives and uploads code to the artifact store. +- **`allow_download_from_code_repository`**: If `True`, files from a registered code repository (with no local changes) will be downloaded instead of included in the image. +- **`allow_download_from_artifact_store`**: If the previous option is `False` or no suitable code repository exists, files will be archived and uploaded to the artifact store, provided this is `True`. +- **`allow_including_files_in_images`**: If both previous options are `False`, files will be included in the Docker image if this is `True`. Modifications to code files will require rebuilding the Docker image. -- **`allow_including_files_in_images`**: If the previous options are `False`, this option allows including files in the Docker image, necessitating a new image build for each code modification. +**Warning**: Setting all attributes to `False` is not recommended, as it may lead to unintended behavior. You must ensure all files are correctly located in the Docker images used for pipeline execution. -> **Warning**: Setting all attributes to `False` is not recommended, as it may lead to unintended behavior. You must ensure all files are correctly placed in the Docker images for pipeline execution. +### File Exclusion and Inclusion -#### File Management - **Excluding Files**: Use a `.gitignore` file to exclude files when downloading from a code repository. - -- **Including Files**: To exclude files from the Docker image and reduce size, use a `.dockerignore` file. This can be done by: - - Creating a `.dockerignore` file in the source root. +- **Including Files**: When files are included in the Docker image, use a `.dockerignore` file to keep unneeded files out and reduce the image size.
This can be done by either: + - Creating a `.dockerignore` in the source root directory. - Specifying a `.dockerignore` file explicitly: ```python @@ -11651,7 +11586,7 @@ def my_pipeline(...): ... ``` -This setup ensures efficient management of files in ZenML Docker images. +This setup allows for efficient management of files in ZenML Docker images. ================================================== @@ -11659,11 +11594,11 @@ This setup ensures efficient management of files in ZenML Docker images. ### Summary of Docker Settings Customization in ZenML -In ZenML, you can customize Docker settings at the step level, allowing different steps in a pipeline to use distinct Docker images. By default, all steps utilize the same Docker image defined at the pipeline level. To specify a different image for a step, you can use the `DockerSettings` in the step decorator or within a configuration file. +ZenML allows customization of Docker settings at the step level within a pipeline. By default, all steps use the same Docker image defined at the pipeline level. However, specific steps may require different Docker images due to unique requirements. This can be achieved by using the `DockerSettings` in the step decorator or through a configuration file. #### Customizing Docker Settings in Step Decorator -You can define Docker settings directly in the step decorator as follows: +To customize Docker settings directly in the step decorator, use the following code: ```python from zenml import step @@ -11682,7 +11617,7 @@ def training(...): #### Customizing Docker Settings in Configuration File -Alternatively, you can set Docker configurations in a YAML configuration file: +Alternatively, you can define Docker settings in a configuration file as shown below: ```yaml steps: @@ -11698,7 +11633,7 @@ steps: - numpy ``` -This allows for flexibility in managing dependencies and integrations specific to each step in the pipeline. +This flexibility allows for tailored environments for different steps within the same pipeline. ================================================== @@ -11710,7 +11645,7 @@ To use a private PyPI repository that requires authentication, follow these step 1. **Store Credentials Securely**: Use environment variables for sensitive information. 2. **Configure Package Managers**: Set up `pip` or `poetry` to utilize these credentials during package installation. -3. **Custom Docker Images**: Optionally, create Docker images with the required authentication. +3. **Custom Docker Images**: Consider creating Docker images with the necessary authentication pre-configured. #### Example Code for Authentication Setup @@ -11737,7 +11672,7 @@ if __name__ == "__main__": my_pipeline() ``` -**Note**: Handle credentials with care and use secure methods for managing and sharing authentication information within your team. +**Important Note**: Handle credentials with care. Always use secure methods for managing and distributing authentication information within your team. ================================================== @@ -11746,65 +11681,64 @@ if __name__ == "__main__": ### Summary: Reusing Builds in ZenML #### Overview -This documentation explains how to reuse builds in ZenML to enhance pipeline efficiency. A build encapsulates a pipeline and its stack, including Docker images and optionally the pipeline code. +This guide explains how to reuse builds in ZenML to enhance pipeline efficiency. A build encapsulates a pipeline and its stack, including Docker images and optionally the pipeline code. 
#### What is a Build? -- A **pipeline build** contains the Docker images and requirements from the stack, integrations, and user. -- List builds using: - ```bash - zenml pipeline builds list --pipeline_id='startswith:ab53ca' - ``` -- Create a build with: - ```bash - zenml pipeline build --stack vertex-stack my_module.my_pipeline_instance - ``` +A build packages a pipeline together with the stack it runs on. It contains the necessary Docker images and can optionally include the pipeline code. + +**List Builds:** +```bash +zenml pipeline builds list --pipeline_id='startswith:ab53ca' +``` + +**Create a Build:** +```bash +zenml pipeline build --stack vertex-stack my_module.my_pipeline_instance +``` #### Reusing Builds -- ZenML automatically reuses existing builds if they match the pipeline and stack. -- To force a specific build, pass the build ID to the `build` parameter in the pipeline configuration. -- Note: Reusing a build executes the code in the Docker image, not local changes. To include local changes, disconnect your code from the build by registering a code repository or using the artifact store. +ZenML automatically reuses existing builds if they match the pipeline and stack. You can specify a build ID to force the use of a particular build (see the sketch at the end of this section). Note that using a custom build will execute the code bundled in the Docker image, not your local changes. To incorporate local changes while reusing a build, disconnect your code from the build by either registering a code repository or using the artifact store. #### Using the Artifact Store -- By default, ZenML uploads your code to the artifact store if no code repository is detected and `allow_download_from_artifact_store` is not set to `False`. +ZenML can upload your code to the artifact store by default if no code repository is detected. This allows for code reuse without needing to rebuild Docker images. -#### Connecting Code Repositories -- Registering a code repository speeds up Docker builds and allows code iteration without rebuilding images. -- ZenML automatically reuses builds when a clean repository state is detected. -- Ensure relevant integrations are installed (e.g., GitHub): - ```sh - zenml integration install github - ``` +#### Code Repositories +Connecting a git repository speeds up Docker builds by avoiding the inclusion of source files during image creation. Instead, files are downloaded into the container before execution. ZenML automatically identifies matching builds, eliminating the need to specify build IDs in a clean repository state. -#### Detecting Local Code Repository Checkouts -- ZenML checks if the files used in a pipeline are tracked in registered repositories by computing the source root and verifying its inclusion. **Install GitHub Integration:** +```sh +zenml integration install github +``` + +#### Detecting Local Code Repositories +ZenML checks if the files used in a pipeline are tracked in registered code repositories, computing the source root and verifying its inclusion in a local checkout. #### Tracking Code Versions -- If a local repository is detected, ZenML stores a reference to the current commit for the pipeline run, ensuring reproducibility. This only occurs with a clean local checkout. +When a local repository is detected, ZenML stores the current commit reference for the pipeline run. This tracking only occurs if the local checkout is clean, ensuring the pipeline runs with the exact code from the repository.
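To make "force the use of a particular build" concrete, here is a minimal sketch. It assumes the `build` argument of `with_options`, which mirrors the `build` key in the run configuration YAML; the placeholder stands for an ID from `zenml pipeline builds list`:

```python
from zenml import pipeline

@pipeline
def my_pipeline():
    ...

if __name__ == "__main__":
    # Reuse an existing build instead of building new Docker images.
    my_pipeline.with_options(build="<BUILD_ID>")()
```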
-#### Tips and Best Practices +#### Best Practices - Ensure the local checkout is clean and the latest commit is pushed to avoid file download failures. -- For options to disable or enforce file downloading, refer to the documentation on Docker settings. +- For options to disable or enforce file downloading, refer to the Docker settings documentation. -This guide provides essential practices for reusing builds in ZenML, enhancing pipeline performance while maintaining code integrity. +This guide provides essential practices for reusing builds in ZenML, enhancing efficiency while ensuring code integrity. ================================================== === File: docs/book/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md === -# Summary of Specifying Pip Dependencies and Apt Packages - -## Overview -This documentation outlines how to specify pip and apt dependencies in ZenML pipelines, applicable only for remote pipelines (not local ones). When using a remote orchestrator, a Dockerfile is dynamically generated to build the Docker image. +# Specifying Pip Dependencies and Apt Packages -## Key Points +**Note:** Configuration for pip and apt dependencies applies only to remote pipelines, not local ones. -- **DockerSettings Import**: Use `from zenml.config import DockerSettings`. +When a pipeline runs with a remote orchestrator, a Dockerfile is dynamically generated to build the Docker image. You can import `DockerSettings` using: -- **Default Behavior**: ZenML installs all packages required by the active stack automatically. +```python +from zenml.config import DockerSettings +``` -### Methods to Specify Dependencies +By default, ZenML installs all packages required by your active stack. You can specify additional packages in several ways: -1. **Replicate Local Environment**: +1. **Replicate Local Environment:** ```python docker_settings = DockerSettings(replicate_local_python_environment="pip_freeze") @@ -11813,7 +11747,7 @@ This documentation outlines how to specify pip and apt dependencies in ZenML pip ... ``` -2. **Custom Command for Requirements**: +2. **Custom Command for Requirements:** ```python docker_settings = DockerSettings(replicate_local_python_environment=[ "poetry", "export", "--extras=train", "--format=requirements.txt" @@ -11824,7 +11758,7 @@ This documentation outlines how to specify pip and apt dependencies in ZenML pip ... ``` -3. **Specify Requirements in Code**: +3. **Specify Requirements in Code:** ```python docker_settings = DockerSettings(requirements=["torch==1.12.0", "torchvision"]) @@ -11833,7 +11767,7 @@ This documentation outlines how to specify pip and apt dependencies in ZenML pip ... ``` -4. **Use a Requirements File**: +4. **Specify a Requirements File:** ```python docker_settings = DockerSettings(requirements="/path/to/requirements.txt") @@ -11842,7 +11776,7 @@ This documentation outlines how to specify pip and apt dependencies in ZenML pip ... ``` -5. **Specify ZenML Integrations**: +5. **Specify ZenML Integrations:** ```python from zenml.integrations.constants import PYTORCH, EVIDENTLY @@ -11853,7 +11787,7 @@ This documentation outlines how to specify pip and apt dependencies in ZenML pip ... ``` -6. **Specify Apt Packages**: +6. **Specify Apt Packages:** ```python docker_settings = DockerSettings(apt_packages=["git"]) @@ -11862,7 +11796,7 @@ This documentation outlines how to specify pip and apt dependencies in ZenML pip ... ``` -7. **Disable Automatic Stack Requirements Installation**: +7. 
**Disable Automatic Stack Requirement Installation:** ```python docker_settings = DockerSettings(install_stack_requirements=False) @@ -11871,7 +11805,7 @@ This documentation outlines how to specify pip and apt dependencies in ZenML pip ... ``` -8. **Custom Docker Settings for Steps**: +8. **Custom Docker Settings for Steps:** ```python docker_settings = DockerSettings(requirements=["tensorflow"]) @@ -11880,15 +11814,13 @@ This documentation outlines how to specify pip and apt dependencies in ZenML pip ... ``` -### Installation Order -ZenML installs dependencies in the following order: -1. Local Python environment packages -2. Stack requirements (if not disabled) -3. Required integrations -4. Specified requirements +**Installation Order:** +- Local Python environment packages +- Stack requirements (if not disabled) +- Required integrations +- Specified requirements -### Additional Installer Arguments -You can specify extra arguments for the Python package installer: +**Additional Installer Arguments:** ```python docker_settings = DockerSettings(python_package_installer_args={"timeout": 1000}) @@ -11897,8 +11829,7 @@ def my_pipeline(...): ... ``` -### Experimental Features -To use `uv` for faster package installation: +**Experimental `uv` Installer:** ```python docker_settings = DockerSettings(python_package_installer="uv") @@ -11906,9 +11837,7 @@ docker_settings = DockerSettings(python_package_installer="uv") def my_pipeline(...): ... ``` -*Note: `uv` is experimental and may lead to installation errors; switch back to `pip` if issues arise.* - -For more details on using `uv` with PyTorch, refer to the [Astral Docs](https://docs.astral.sh/uv/guides/integration/pytorch/). +*Note:* `uv` may be less stable than `pip`. Refer to [Astral Docs](https://docs.astral.sh/uv/guides/integration/pytorch/) for details on using `uv` with PyTorch. ================================================== @@ -11916,18 +11845,18 @@ For more details on using `uv` with PyTorch, refer to the [Astral Docs](https:// ### Summary of ZenML Docker Integration -ZenML allows users to build dynamic parent Docker images for pipelines using a custom Dockerfile, build context, and options. The build process operates as follows: +ZenML allows users to specify a custom Dockerfile, build context directory, and build options for dynamic image creation during pipeline execution. The build process operates as follows: -- **No Dockerfile Specified**: If requirements, environment variables, or file copying necessitate an image build, ZenML creates an image. Otherwise, it uses the specified `parent_image`. +- **No Dockerfile Specified**: If requirements or environment variables necessitate an image build, ZenML builds a new image; otherwise, it uses the specified `parent_image`. -- **Dockerfile Specified**: ZenML builds an image from the provided Dockerfile. If further requirements necessitate an additional image, it will be built; otherwise, the initial image is used. +- **Dockerfile Specified**: ZenML builds an image from the specified Dockerfile. If additional requirements necessitate another image, it builds a second image; otherwise, it uses the first image for the pipeline. The installation of packages follows this order (each step optional): -1. Local Python environment packages. +1. Packages from the local Python environment. 2. Packages from the `requirements` attribute. 3. Packages from `required_integrations` and stack requirements. 
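A hedged sketch of the flow described above, assuming the `dockerfile`, `build_context_root`, and `build_options` attributes of `DockerSettings` (all paths and build arguments are illustrative):

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Illustrative paths; adjust them to your repository layout.
docker_settings = DockerSettings(
    dockerfile="docker/Dockerfile",  # custom Dockerfile used to build the parent image
    build_context_root="docker",  # directory used as the Docker build context
    build_options={"buildargs": {"SOME_ARG": "value"}},  # extra options for the build
)

@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    ...
```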
-**Note**: The intermediate image may also be used directly for executing pipeline steps depending on the `DockerSettings` configuration. +**Note**: The intermediate image may also be used directly to execute pipeline steps based on Docker settings. ### Example Code ```python @@ -11945,21 +11874,19 @@ def my_pipeline(...): ... ``` -This concise setup allows for flexible Docker image management within ZenML pipelines. +This concise overview captures the essential technical details and code structure necessary for understanding ZenML's Docker integration. ================================================== === File: docs/book/how-to/customize-docker-builds/define-where-an-image-is-built.md === -### Summary: Defining the Image Builder in ZenML +### Image Builder Definition in ZenML -ZenML executes pipeline steps sequentially in the local Python environment. When using remote orchestrators or step operators, it builds Docker images for isolated execution. By default, ZenML uses the local Docker client, which requires Docker installation and permissions. +ZenML executes pipeline steps sequentially in the active Python environment when running locally. For remote orchestrators or step operators, ZenML builds Docker images to run pipelines in isolated environments. By default, execution environments are created locally using the local Docker client, which requires Docker installation and permissions. -ZenML provides **image builders**, a specialized stack component for building and pushing Docker images in a dedicated environment. If no image builder is configured, ZenML defaults to the local image builder, ensuring consistency across builds. +ZenML provides **image builders**, a specialized stack component for building and pushing Docker images in a dedicated environment. If no image builder is configured in the stack, ZenML defaults to the local image builder, ensuring consistency across builds. In this case, the image builder environment matches the client environment. -Users do not need to interact with the image builder directly; as long as it is part of the active ZenML stack, it will be utilized automatically by components requiring container image builds. - -For more details, refer to the documentation on [image builders](../../component-guide/image-builders/image-builders.md) and [stacks](../../user-guide/production-guide/understand-stacks.md). +Users do not need to interact directly with the image builder in their code. As long as the desired image builder is part of the active ZenML stack, it will be automatically utilized by any component that requires container image building. ================================================== @@ -11967,39 +11894,41 @@ For more details, refer to the documentation on [image builders](../../component ### Summary: Using Docker Images to Run Your Pipeline -When running a pipeline with a remote orchestrator, a Dockerfile is dynamically generated to build a Docker image using ZenML. The Dockerfile includes: +#### Overview +When running a pipeline with a remote orchestrator, ZenML dynamically generates a Dockerfile at runtime to build a Docker image. The Dockerfile includes the following steps: -1. **Parent Image**: Starts from a ZenML-installed parent image, typically the official ZenML image. Custom parent images can be specified. -2. **Pip Dependencies**: Automatically installs required integrations. Custom dependencies can be added. -3. **Source Files**: Optionally copies source files into the Docker container. +1. 
**Base Image**: Starts from a parent image with ZenML installed, typically the official ZenML image. +2. **Dependency Installation**: Automatically installs required pip dependencies based on stack integrations. +3. **Source Files**: Optionally copies source files into the Docker container for execution. 4. **Environment Variables**: Sets user-defined environment variables. -For customization, use the `DockerSettings` class: +For customization options, refer to the [DockerSettings object](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.docker_settings.DockerSettings). + +#### Configuring Docker Settings +You can configure Docker settings using the `DockerSettings` class: ```python from zenml.config import DockerSettings ``` -#### Configuring Docker Settings +**Pipeline Configuration**: Apply settings to all pipeline steps: -- **Pipeline Level**: Applies settings to all steps. - ```python docker_settings = DockerSettings() @pipeline(settings={"docker": docker_settings}) -def my_pipeline() -> None: +def my_pipeline(): my_step() ``` -- **Step Level**: Allows separate Docker images for different steps. +**Step Configuration**: Apply settings to individual steps: ```python @step(settings={"docker": docker_settings}) -def my_step() -> None: +def my_step(): pass ``` -- **YAML Configuration**: Define settings in a YAML file. +**YAML Configuration**: Use a YAML file for settings: ```yaml settings: @@ -12012,8 +11941,7 @@ steps: ... ``` -#### Specifying Build Options - +#### Specifying Docker Build Options To pass build options to the image builder: ```python @@ -12023,15 +11951,16 @@ def my_pipeline(...): ... ``` -**Note**: On MacOS with ARM architecture, specify the target platform: +**MacOS ARM Architecture**: Specify the target platform for local Docker caching: ```python docker_settings = DockerSettings(build_config={"build_options": {"platform": "linux/amd64"}}) ``` -#### Using a Custom Parent Image +#### Custom Parent Images +You can specify a custom parent image or Dockerfile for more control over the environment. Ensure the image has Python, pip, and ZenML installed. -To use a pre-built parent image: +**Using a Pre-built Parent Image**: ```python docker_settings = DockerSettings(parent_image="my_registry.io/image_name:tag") @@ -12040,29 +11969,26 @@ def my_pipeline(...): ... ``` -To skip Docker builds: +**Skipping Docker Builds**: ```python docker_settings = DockerSettings(parent_image="my_registry.io/image_name:tag", skip_build=True) +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... ``` -**Warning**: This is an advanced feature that may lead to unintended behavior. Ensure code files are included in the specified image. - -For more details, refer to the [DockerSettings documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.docker_settings.DockerSettings). +**Warning**: This feature may lead to unintended behavior; ensure your code files are included in the specified image. For details, refer to [this guide](./use-a-prebuilt-image.md). ================================================== === File: docs/book/how-to/customize-docker-builds/README.md === -### Using Docker Images to Run Your Pipeline - -ZenML executes pipeline steps sequentially in the local Python environment. For remote orchestrators or step operators, ZenML builds Docker images to run pipelines in an isolated environment. This section covers how to customize the Docker build process. 
+### Customize Docker Builds -**Key Points:** -- **Docker Integration**: ZenML utilizes Docker for running pipelines in a controlled environment. -- **Execution Context**: Local execution uses the active Python environment, while remote execution relies on Docker images. +ZenML runs pipeline steps sequentially in the local Python environment. For remote orchestrators or step operators, it builds Docker images to execute pipelines in an isolated environment. This section covers how to manage the dockerization process effectively. -For further details on orchestrators and step operators, refer to the respective guides on cloud orchestration and step operators. +For more details, refer to the documentation on [cloud orchestration](../../user-guide/production-guide/cloud-orchestration.md) and [step operators](../../component-guide/step-operators/step-operators.md). ================================================== === File: docs/book/how-to/pipeline-development/README.md === # Pipeline Development This section outlines the key components and processes involved in developing pipelines using ZenML. -## Key Concepts - -- **Pipelines**: A sequence of steps that define a workflow for data processing and machine learning tasks. -- **Steps**: Individual components within a pipeline that perform specific tasks (e.g., data ingestion, transformation, model training). -- **Artifacts**: Outputs generated by steps, which can be inputs for subsequent steps. - -## Pipeline Creation +## Key Components: +1. **Pipelines**: A series of steps that define the workflow for data processing and model training. +2. **Steps**: Individual tasks within a pipeline, such as data ingestion, preprocessing, training, and evaluation. +3. **Artifacts**: Outputs generated by each step, which can be used as inputs for subsequent steps. -1. **Define Steps**: Create functions decorated with `@step` to define the logic for each step. +## Development Process: +1. **Define Steps**: Use decorators to define each step in the pipeline. ```python - from zenml.steps import step + from zenml import step + + @step + def data_ingestion(): + # Code for data ingestion + pass @step - def data_ingestion() -> DataFrame: - # Logic for data ingestion - return df + def data_preprocessing(data): + # Code for preprocessing + pass ``` -2. **Build Pipeline**: Use the `@pipeline` decorator to combine steps into a pipeline. +2. **Create Pipeline**: Combine steps into a pipeline. ```python - from zenml.pipelines import pipeline - @pipeline - def my_pipeline(data_ingestion, data_processing, model_training): + from zenml import pipeline + + @pipeline + def my_pipeline(): data = data_ingestion() - processed_data = data_processing(data) - model = model_training(processed_data) + processed_data = data_preprocessing(data) ``` 3. **Run Pipeline**: Execute the pipeline using the ZenML CLI or programmatically. ```python - from zenml.pipelines import run - - run(my_pipeline) + # Calling the decorated function runs the pipeline on the active stack + my_pipeline() ``` -## Pipeline Configuration - -- **Parameters**: Pass parameters to steps for customization. -- **Environment**: Define the execution environment (e.g., local, cloud). - -## Versioning and Reproducibility - -- **Version Control**: Use Git or similar tools to track changes in pipeline code. -- **Artifact Tracking**: ZenML automatically tracks artifacts for reproducibility. +## Configuration: +- **Parameters**: Customize steps with parameters to control behavior (see the sketch after this list). +- **Environment**: Define execution environments for reproducibility.
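A minimal sketch of the parameter idea above; the step and parameter names are illustrative rather than taken from the summarized docs:

```python
from zenml import pipeline, step

@step
def train_model(data: dict, epochs: int = 5, learning_rate: float = 1e-3) -> None:
    # Step parameters such as `epochs` can be overridden per run,
    # for example from a YAML configuration file.
    ...

@pipeline
def my_pipeline():
    train_model(data={"features": []}, epochs=10)
```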
-## Integration with Other Tools +## Best Practices: +- Modularize steps for reusability. +- Version control artifacts for tracking changes. -ZenML integrates with various tools for data storage, orchestration, and model deployment, enhancing the pipeline's capabilities. - -## Conclusion - -ZenML provides a structured approach to pipeline development, ensuring modularity, reusability, and reproducibility in machine learning workflows. +This summary provides a foundational understanding of pipeline development in ZenML, focusing on the structure, creation, and execution of pipelines while highlighting best practices. ================================================== @@ -12132,28 +12047,29 @@ ZenML provides a structured approach to pipeline development, ensuring modularit # Limitations of Defining Steps in Notebook Cells -To run ZenML steps defined in notebook cells remotely (with a remote orchestrator or step operator), the following conditions must be met: +To run ZenML steps defined in notebook cells remotely (using a remote orchestrator or step operator), the following conditions must be met: -- The cell can only contain Python code; Jupyter magic commands or shell commands (starting with `%` or `!`) are not allowed. -- The cell **must not** call code from other notebook cells. However, functions or classes imported from Python files are permitted. -- The cell **must not** rely on imports from previous cells. All necessary imports, including ZenML imports (e.g., `from zenml import step`), must be included within the cell itself. +- The cell must contain only Python code; Jupyter magic commands or shell commands (starting with `%` or `!`) are not allowed. +- The cell must not call code from other notebook cells. However, importing functions or classes from Python files is permitted. +- The cell must independently handle all necessary imports, including ZenML imports (e.g., `from zenml import step`), without relying on imports from previous cells. ================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/run-a-single-step-from-a-notebook.md === -### Running a Single Step from a Notebook +### Summary of Running a Single Step from a Notebook -To execute a single step remotely from a notebook, call the step like a standard Python function. ZenML will create a pipeline with just that step and run it on the active stack. Be aware of the [limitations](limitations-of-defining-steps-in-notebook-cells.md) when defining steps in notebook cells. +To execute a single step remotely from a notebook using ZenML, call the step as a Python function. ZenML will create a pipeline with the specified step and run it on the active stack. Be aware of the [limitations](limitations-of-defining-steps-in-notebook-cells.md) when defining remote steps in notebook cells. -#### Example Code +#### Code Example ```python from zenml import step import pandas as pd from sklearn.base import ClassifierMixin from sklearn.svm import SVC -from typing import Tuple, Annotated +from typing import Tuple +from typing_extensions import Annotated @step(step_operator="") def svc_trainer( @@ -12168,43 +12084,41 @@ def svc_trainer( print(f"Train accuracy: {train_acc}") return model, train_acc -X_train = pd.DataFrame(...) # Define your training data -y_train = pd.Series(...) # Define your training labels +# Prepare training data +X_train = pd.DataFrame(...) +y_train = pd.Series(...) 
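# (Illustrative aside, not part of the summarized docs: the `...` placeholders
# above stand in for real data. Assuming `from sklearn.datasets import load_iris`
# were added to the imports, one concrete option would be:
#     iris = load_iris(as_frame=True)
#     X_train, y_train = iris.data, iris.target
# )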
# Execute the step model, train_acc = svc_trainer(X_train=X_train, y_train=y_train) ``` -This code defines a step to train a Support Vector Classifier (SVC) and runs it directly, creating a pipeline that executes on the specified stack. +This code defines a step for training a Support Vector Classifier (SVC) and demonstrates how to call it directly, resulting in a pipeline execution on the active stack. ================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/README.md === -### Summary: Running Remote Pipelines from Jupyter Notebooks with ZenML - -ZenML allows you to define and execute steps and pipelines in Jupyter Notebooks remotely. The process involves extracting code from notebook cells and running it as Python modules within Docker containers. To successfully run remote pipelines, specific conditions must be met in the notebook cells. +### Summary: Running Remote Pipelines from Jupyter Notebooks -#### Key Points: -- **Remote Execution**: ZenML extracts code from Jupyter Notebook cells to run as Python modules in Docker containers. -- **Conditions**: Notebook cells must adhere to certain requirements for the execution to work properly. +ZenML allows the definition and execution of steps and pipelines directly within Jupyter Notebooks. The process involves extracting code from notebook cells and executing it as Python modules within Docker containers for remote execution. -#### Additional Resources: -- **Limitations**: Refer to [Limitations of Defining Steps in Notebook Cells](limitations-of-defining-steps-in-notebook-cells.md). -- **Single Step Execution**: See [Run a Single Step from a Notebook](run-a-single-step-from-a-notebook.md). +**Key Points:** +- **Execution Environment:** Notebook cells must adhere to specific conditions for ZenML to function properly. +- **Documentation Links:** + - [Limitations of defining steps in notebook cells](limitations-of-defining-steps-in-notebook-cells.md) + - [Run a single step from a notebook](run-a-single-step-from-a-notebook.md) -![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) +This setup enables efficient remote execution of data workflows while leveraging the interactive capabilities of Jupyter Notebooks. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/autogenerate-a-template-yaml-file.md === -### Summary of Documentation for Autogenerating a YAML Configuration Template +### Summary of Configuration File Generation in ZenML -**Purpose:** -To create a YAML configuration template for a specific pipeline using the `.write_run_configuration_template()` method, which generates a file with all options commented out for customization. +To create a configuration file for your pipeline, you can use the `.write_run_configuration_template()` method, which generates a YAML template with all options commented out for customization. -**Code Example:** +#### Code Example ```python from zenml import pipeline @@ -12216,30 +12130,26 @@ def simple_ml_pipeline(parameter: int): simple_ml_pipeline.write_run_configuration_template(path="") ``` -**Generated YAML Configuration Template Overview:** -- **Root Level:** - - `build`: Pipeline build information. - - `enable_artifact_metadata`, `enable_artifact_visualization`, `enable_cache`, `enable_step_logs`: Optional settings for pipeline behavior. - - `model`: Metadata about the model (name, description, version, etc.). 
- - `parameters`: Optional parameters for the pipeline. - - `run_name`: Optional name for the run. - - `schedule`: Scheduling options (catchup, cron expression, etc.). - - `settings`: Configuration for Docker and resources (CPU, GPU, memory). +#### Generated YAML Configuration Template +The generated YAML template includes various sections, such as: + +- **Build and Settings**: Options for pipeline build and Docker settings. +- **Model Metadata**: Fields for model details like name, description, and tags. +- **Parameters**: Optional parameters for the pipeline. +- **Schedule**: Configuration for scheduling pipeline runs. +- **Steps**: Detailed settings for each step in the pipeline, including: + - **Load Data**: Metadata and settings for the data loading step. + - **Train Model**: Metadata and settings for the model training step. -- **Steps:** - - Each step (e.g., `load_data`, `train_model`) includes: - - Metadata and settings similar to the root level. - - `outputs`: Definition of output artifacts. - - `parameters`: Step-specific parameters. - - `settings`: Docker configuration and resource allocation. +Each step can include options for enabling artifact metadata, caching, logging, and Docker settings. -**Customization:** -To configure the pipeline with a specific stack, use: +#### Additional Configuration +You can also specify a stack when generating the template by using: ```python simple_ml_pipeline.write_run_configuration_template(stack=) ``` -This documentation provides a comprehensive guide to generating and customizing a YAML configuration template for ZenML pipelines, ensuring users can tailor their settings effectively. +This allows for tailored configurations based on the selected stack. ================================================== @@ -12247,7 +12157,7 @@ This documentation provides a comprehensive guide to generating and customizing ### Summary: Configuring Runtime Settings in ZenML -**Overview**: ZenML allows configuration of pipeline runtime settings through a central concept called `BaseSettings`. These settings enable customization of stack components and pipelines. +**Overview**: ZenML allows configuration of runtime settings for stack components and pipelines through a central concept called `BaseSettings`. These settings enable customization of resources, containerization, and stack component-specific configurations. #### Types of Settings 1. **General Settings**: Applicable to all ZenML pipelines. @@ -12255,7 +12165,7 @@ This documentation provides a comprehensive guide to generating and customizing - `DockerSettings`: Configure Docker settings. - `ResourceSettings`: Specify resource requirements. -2. **Stack-Component-Specific Settings**: Used for specific stack components, identified by keys like `` or `.`. +2. **Stack-Component-Specific Settings**: Provide runtime configurations for specific stack components, identified by keys in the format `` or `.`. - Examples: - `SkypilotAWSOrchestratorSettings` - `KubeflowOrchestratorSettings` @@ -12266,16 +12176,16 @@ This documentation provides a comprehensive guide to generating and customizing - `VertexStepOperatorSettings` - `AzureMLStepOperatorSettings` -#### Registration-Time vs. Real-Time Settings -- **Registration-Time Settings**: Static configurations set during component registration, e.g., `tracking_url` for MLflow. -- **Real-Time Settings**: Dynamic configurations that can change with each pipeline run, e.g., `experiment_name`. 
+#### Registration-Time vs Real-Time Settings +- **Registration-Time Settings**: Static configurations set during component registration (e.g., `tracking_url` for MLflow). +- **Real-Time Settings**: Dynamic configurations that can change with each pipeline run (e.g., `experiment_name`). Default values can be set during registration, which can be overridden at runtime. #### Key Specification for Settings -- Use keys that match the pattern `` or `.`. If only the category is specified, settings apply to any compatible component flavor. +- Use the correct key format (`` or `.`) to ensure settings are applied to the correct component flavor. If the settings do not match the component flavor, they will be ignored. -#### Code Examples +#### Example Code Snippets **Python Code**: ```python @@ -12283,7 +12193,6 @@ Default values can be set during registration, which can be overridden at runtim def my_step(): ... -# Using the class @step(step_operator="nameofstepoperator", settings={"step_operator": SagemakerStepOperatorSettings(instance_type="m7g.medium")}) def my_step(): ... @@ -12300,22 +12209,21 @@ steps: instance_type: m7g.medium ``` -This documentation provides a comprehensive guide on configuring runtime settings in ZenML, focusing on the types of settings, their application, and how to specify them correctly. +This documentation provides a foundational understanding of configuring runtime settings in ZenML, emphasizing the distinction between general and component-specific settings, as well as their application in both registration and execution contexts. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/retrieve-used-configuration-of-a-run.md === -To extract the configuration used in a completed pipeline run, you can access the `config` attribute of the pipeline run or a specific step within it. Here's how to do it: +To extract the configuration used in a completed pipeline run, you can access the `config` attribute of the pipeline run or a specific step within it. +### Code Example: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run() - # General configuration for the pipeline pipeline_run.config - # Configuration for a specific step pipeline_run.steps[].config ``` @@ -12328,11 +12236,13 @@ This allows you to retrieve both the overall pipeline configuration and the conf ### Configuration File Usage in ZenML -**Best Practice**: Use YAML files for configuration to separate it from code, although configuration can also be specified directly in code. +**Best Practices:** +It is recommended to use a YAML configuration file to separate configuration from code, although configurations can also be specified directly in code. -**Applying Configuration**: Use the `with_options(config_path=)` method to apply configurations to a pipeline. +**Applying Configuration:** +Use the `with_options(config_path=)` pattern to apply configurations to a pipeline. 
-#### Example YAML Configuration +**Example YAML Configuration:** ```yaml enable_cache: False parameters: dataset_name: "best_dataset" steps: load_data: enable_cache: False ``` -#### Example Python Code +**Example Python Code:** ```python from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... @pipeline def simple_ml_pipeline(dataset_name: str): load_data(dataset_name) if __name__ == "__main__": simple_ml_pipeline.with_options(config_path=<CONFIG_YAML_PATH>)() ``` -**Functionality**: The above code runs `simple_ml_pipeline` with caching disabled for `load_data` and sets `dataset_name` to `"best_dataset"`. +**Functionality:** The above code runs `simple_ml_pipeline` with caching disabled for `load_data` and sets `dataset_name` to "best_dataset". ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/configuration-hierarchy.md === ### Configuration Hierarchy -In ZenML, configurations can be set at different levels, with specific rules governing their precedence: +In ZenML, configuration settings follow a specific hierarchy: 1. **Code Configurations**: Override YAML file configurations. 2. **Step-Level Configurations**: Override pipeline-level configurations. -3. **Attribute Merging**: Dictionaries are merged for attributes. +3. **Attribute Merging**: Dictionaries for attributes are merged. ### Example Code @@ -12392,7 +12303,7 @@ def train_model(data: dict) -> None: def simple_ml_pipeline(parameter: int): ... -# Configuration Results +# Configuration results train_model.configuration.settings["resources"] # -> cpu_count: 2, gpu_count=1, memory="2GB" simple_ml_pipeline.configuration.settings["resources"] ``` ### Key Points + - Step configurations take precedence over pipeline configurations. -- Resource settings can be specified for both steps and pipelines, with the step settings overriding the pipeline settings when applicable. +- Resource settings can be defined at both the step and pipeline levels, with the step settings overriding the pipeline settings where applicable. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/what-can-be-configured.md === # Configuration Overview -This documentation provides a sample YAML configuration for a ZenML pipeline, highlighting key parameters and their purposes. For a complete list of all possible keys, refer to the specified page. +This document outlines the configuration options available in a YAML file for a ZenML pipeline. Below is a concise summary of key components and their significance. ## Sample YAML Configuration + ```yaml -build: dcd6fafb-c200-4e85-8328-428bef98d804 # Docker image ID +build: dcd6fafb-c200-4e85-8328-428bef98d804 enable_artifact_metadata: True enable_artifact_visualization: False @@ -12484,19 +12397,20 @@ steps: instance_type: m7g.medium ``` -## Key Points +## Key Configuration Elements ### `enable_XXX` Parameters -- **enable_artifact_metadata**: Control metadata association with artifacts. -- **enable_artifact_visualization**: Attach visualizations to artifacts. -- **enable_cache**: Utilize caching. -- **enable_step_logs**: Enable step logging. +These boolean flags control various behaviors: +- `enable_artifact_metadata`: Associates metadata with artifacts. +- `enable_artifact_visualization`: Attaches visualizations of artifacts. +- `enable_cache`: Enables caching. +- `enable_step_logs`: Enables tracking of step logs.
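The same flags can also be set in code. A hedged sketch, assuming the `@pipeline` and `@step` decorators accept keyword arguments mirroring these YAML keys:

```python
from zenml import pipeline, step

@step(enable_cache=False, enable_step_logs=True)
def load_data() -> dict:
    return {"features": []}

@pipeline(enable_artifact_metadata=True, enable_artifact_visualization=False)
def my_pipeline():
    load_data()
```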
### `build` ID -Specifies the UUID of the Docker image to use, skipping Docker build for remote orchestrators. +Specifies the UUID of the Docker image to use. If provided, Docker image building is skipped. -### Configuring the `model` -Defines the ZenML model used in the pipeline: +### `model` +Defines the ZenML Model used in the pipeline: ```yaml model: name: "ModelName" @@ -12516,16 +12430,13 @@ steps: parameters: gamma: 0.001 ``` -Step parameters take precedence over pipeline parameters. +The step-level parameters take precedence. -### Setting the `run_name` -Dynamic run names can be set to avoid conflicts when scheduling runs: -```yaml -run_name: -``` +### `run_name` +Specifies a unique name for the run. Avoid static values to prevent conflicts. ### Stack Component Runtime Settings -Settings for Docker and resource configurations: +Configurations for Docker and resource settings: ```yaml settings: docker: @@ -12537,13 +12448,13 @@ settings: memory: "4Gb" ``` -### Step-specific Configuration +### Step-Specific Configuration Certain configurations apply only at the step level: -- **experiment_tracker**: Name of the experiment tracker for the step. -- **step_operator**: Name of the step operator for the step. -- **outputs**: Configuration for output artifacts, including materializer sources. +- `experiment_tracker`: Name of the experiment tracker for the step. +- `step_operator`: Name of the step operator for the step. +- `outputs`: Configuration for output artifacts, including materializer source paths. -Refer to the documentation for more details on each configuration aspect and their implications. +This summary retains critical technical information while providing a clear overview of the configuration options available for ZenML pipelines. ================================================== @@ -12551,14 +12462,11 @@ Refer to the documentation for more details on each configuration aspect and the ZenML allows for easy configuration and execution of pipelines using YAML files at runtime. These configuration files enable users to set parameters, manage caching behavior, and configure stack components. Key topics include: -- **What can be configured**: Details on configurable elements. -- **Configuration hierarchy**: Structure of configuration settings. -- **Autogenerate a template YAML file**: Instructions for creating a template. +- **What Can Be Configured**: Details on configurable elements in ZenML. +- **Configuration Hierarchy**: Explanation of the structure and precedence of configurations. +- **Autogenerate a Template YAML File**: Instructions for creating a base YAML configuration file automatically. -For more information, refer to the linked sections: -- [What can be configured](what-can-be-configured.md) -- [Configuration hierarchy](configuration-hierarchy.md) -- [Autogenerate a template YAML file](autogenerate-a-template-yaml-file.md) +For more information, refer to the linked sections on each topic. ================================================== @@ -12566,77 +12474,90 @@ For more information, refer to the linked sections: ### Summary: Creating Pipeline Variants for Local Development and Production in ZenML -ZenML allows for the creation of different pipeline variants tailored for local development and production environments, facilitating rapid iteration during development while ensuring robust production setups. 
The key methods to achieve this include: +When developing ZenML pipelines, it's useful to create different variants for local development and production to facilitate quick iterations while maintaining a robust production setup. This can be achieved through: -1. **Using Configuration Files**: Define pipeline configurations in YAML files. For example: - - **Development Configuration (`config_dev.yaml`)**: - ```yaml - enable_cache: False - parameters: - dataset_name: "small_dataset" - steps: - load_data: - enable_cache: False - ``` +1. **Configuration Files** +2. **Code Implementation** +3. **Environment Variables** - **Applying Configuration**: - ```python - from zenml import step, pipeline +#### 1. Using Configuration Files +ZenML supports YAML configuration files to specify pipeline and step settings. For example, a development configuration might look like this: - @step - def load_data(dataset_name: str) -> dict: - ... +```yaml +enable_cache: False +parameters: + dataset_name: "small_dataset" +steps: + load_data: + enable_cache: False +``` - @pipeline - def ml_pipeline(dataset_name: str): - load_data(dataset_name) +To apply this configuration: - if __name__ == "__main__": - ml_pipeline.with_options(config_path="path/to/config.yaml")() - ``` +```python +from zenml import step, pipeline -2. **Implementing Variants in Code**: Directly manage pipeline variants using flags in your code. +@step +def load_data(dataset_name: str) -> dict: + ... - ```python - import os - from zenml import step, pipeline +@pipeline +def ml_pipeline(dataset_name: str): + load_data(dataset_name) - @step - def load_data(dataset_name: str) -> dict: - ... +if __name__ == "__main__": + ml_pipeline.with_options(config_path="path/to/config.yaml")() +``` - @pipeline - def ml_pipeline(is_dev: bool = False): - dataset = "small_dataset" if is_dev else "full_dataset" - load_data(dataset) +Separate configuration files can be created for development (`config_dev.yaml`) and production (`config_prod.yaml`). - if __name__ == "__main__": - is_dev = os.environ.get("ZENML_ENVIRONMENT") == "dev" - ml_pipeline(is_dev=is_dev) - ``` +#### 2. Implementing Variants in Code +You can also define pipeline variants directly in your code: -3. **Using Environment Variables**: Determine the pipeline variant based on environment variables. +```python +import os +from zenml import step, pipeline - ```python - import os +@step +def load_data(dataset_name: str) -> dict: + ... - config_path = "config_dev.yaml" if os.environ.get("ZENML_ENVIRONMENT") == "dev" else "config_prod.yaml" - ml_pipeline.with_options(config_path=config_path)() - ``` +@pipeline +def ml_pipeline(is_dev: bool = False): + dataset = "small_dataset" if is_dev else "full_dataset" + load_data(dataset) + +if __name__ == "__main__": + is_dev = os.environ.get("ZENML_ENVIRONMENT") == "dev" + ml_pipeline(is_dev=is_dev) +``` + +This method uses a boolean flag to switch between variants. + +#### 3. 
Using Environment Variables +Environment variables can dictate which configuration to use: + +```python +import os + +config_path = "config_dev.yaml" if os.environ.get("ZENML_ENVIRONMENT") == "dev" else "config_prod.yaml" +ml_pipeline.with_options(config_path=config_path)() +``` + +Run the pipeline with: +- `ZENML_ENVIRONMENT=dev python run.py` +- `ZENML_ENVIRONMENT=prod python run.py` - **Run Commands**: - - For development: `ZENML_ENVIRONMENT=dev python run.py` - - For production: `ZENML_ENVIRONMENT=prod python run.py` +#### Development Variant Considerations +For faster iteration in development, consider: +- Smaller datasets +- Local execution stack +- Reduced training epochs +- Decreased batch size +- Smaller base models -### Development Variant Considerations -When designing a development variant, optimize for speed and debugging: -- Use smaller datasets -- Specify a local execution stack -- Reduce training epochs and batch size -- Use smaller base models +Example configuration: -**Example in Configuration**: ```yaml parameters: dataset_path: "data/small_dataset.csv" @@ -12645,7 +12566,8 @@ batch_size: 16 stack: local_stack ``` -**Example in Code**: +Or in code: + ```python @pipeline def ml_pipeline(is_dev: bool = False): @@ -12657,91 +12579,98 @@ def ml_pipeline(is_dev: bool = False): train_model(epochs=epochs, batch_size=batch_size) ``` -By implementing these strategies, developers can efficiently test and debug locally while maintaining a comprehensive production pipeline. +By creating these variants, you can efficiently test and debug locally while ensuring a comprehensive setup for production, enhancing your development workflow. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md === -### Summary of ZenML Pipeline Management Documentation +### Summary of ZenML Pipeline Cleanliness -#### Keeping Your Dashboard Clean -To maintain a clean development environment while using ZenML, consider the following strategies: +**Objective**: Maintain a clean pipeline environment during development. -1. **Run Locally**: Disconnect from the remote server and run a local server to avoid clutter. - ```bash - zenml login --local - ``` - Reconnect to the remote server using: - ```bash - zenml login - ``` +#### 1. Running Locally +To avoid cluttering a shared server, disconnect and run a local server: +```bash +zenml login --local +``` +Reconnect to the remote server with: +```bash +zenml login +``` -2. **Unlisted Runs**: Create pipeline runs without associating them with a pipeline to keep the dashboard focused. - ```python - pipeline_instance.run(unlisted=True) - ``` +#### 2. Pipeline Runs +- **Unlisted Runs**: Create runs without associating them with a pipeline: + ```python + pipeline_instance.run(unlisted=True) + ``` + These runs won't appear on the pipeline's dashboard. -3. 
**Deleting Pipeline Runs**: - - To delete a specific run: - ```bash - zenml pipeline runs delete - ``` - - To delete all runs from the last 24 hours: - ```python - #!/usr/bin/env python3 - import datetime - from zenml.client import Client +- **Deleting Pipeline Runs**: To delete a specific run: + ```bash + zenml pipeline runs delete + ``` + To delete all runs from the last 24 hours: + ```python + #!/usr/bin/env python3 + import datetime + from zenml.client import Client - def delete_recent_pipeline_runs(): - zc = Client() - time_filter = (datetime.datetime.utcnow() - datetime.timedelta(hours=24)).strftime("%Y-%m-%d %H:%M:%S") - recent_runs = zc.list_pipeline_runs(created=f"gt:{time_filter}") - for run in recent_runs: - zc.delete_pipeline_run(run.id) - print(f"Deleted {len(recent_runs)} pipeline runs.") + def delete_recent_pipeline_runs(): + zc = Client() + time_filter = (datetime.datetime.utcnow() - datetime.timedelta(hours=24)).strftime("%Y-%m-%d %H:%M:%S") + recent_runs = zc.list_pipeline_runs(created=f"gt:{time_filter}") + for run in recent_runs: + zc.delete_pipeline_run(run.id) + print(f"Deleted {len(recent_runs)} pipeline runs.") - if __name__ == "__main__": - delete_recent_pipeline_runs() - ``` + if __name__ == "__main__": + delete_recent_pipeline_runs() + ``` -4. **Deleting Pipelines**: Remove unnecessary pipelines with: - ```bash - zenml pipeline delete - ``` +#### 3. Pipelines +- **Deleting Pipelines**: Remove unnecessary pipelines: + ```bash + zenml pipeline delete + ``` -5. **Unique Pipeline Names**: Assign unique names to pipeline runs for differentiation: - ```python - training_pipeline = training_pipeline.with_options(run_name="custom_pipeline_run_name") - training_pipeline() - ``` +- **Unique Pipeline Names**: Assign custom names to differentiate runs: + ```python + training_pipeline = training_pipeline.with_options(run_name="custom_pipeline_run_name") + training_pipeline() + ``` -6. **Model Management**: To delete a model: - ```bash - zenml model delete - ``` +#### 4. Models +To delete a model: +```bash +zenml model delete +``` -7. **Artifact Management**: - - Prune unreferenced artifacts: - ```bash - zenml artifact prune - ``` - - Use flags `--only-artifact` and `--only-metadata` for selective deletion. +#### 5. Artifacts +- **Pruning Artifacts**: Remove unreferenced artifacts: + ```bash + zenml artifact prune + ``` + Use flags `--only-artifact` or `--only-metadata` for specific deletions. -8. **Cleaning Environment**: Use the `zenml clean` command to remove all local pipelines, runs, and artifacts: - ```bash - zenml clean --local - ``` +#### 6. Cleaning Environment +For a complete reset of local data: +```bash +zenml clean +``` +Use `--local` to delete local files related to the active stack. This command does not affect server data. -By following these practices, you can maintain an organized pipeline dashboard and focus on relevant runs for your project. +By utilizing these strategies, you can keep your ZenML dashboard organized and focused on relevant runs. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/README.md === -# Develop Locally +### Develop Locally + +This section outlines best practices for developing pipelines locally, enabling faster iteration and cost-effective testing. Developers typically work with a smaller subset of data or synthetic data. 
ZenML supports local development, guiding users through the process of building pipelines locally before deploying them on more powerful remote hardware. -This section outlines best practices for developing pipelines locally, allowing for faster iteration and cost-effective testing. Developers often work with a smaller subset of data or synthetic data. ZenML supports local development, guiding users on how to work locally before transitioning to more powerful remote hardware for execution. +![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== @@ -12750,87 +12679,110 @@ This section outlines best practices for developing pipelines locally, allowing ### Summary: Inspecting a Finished Pipeline Run and Its Outputs #### Overview -After completing a pipeline run, users can access various information programmatically, including loading artifacts, accessing metadata, and inspecting the lineage of runs and artifacts. The hierarchy consists of pipelines, runs, steps, and artifacts, with multiple 1-to-N relationships. +After a pipeline run is completed, users can access various information programmatically, including loading artifacts, accessing metadata, and inspecting the lineage of pipeline runs. + +#### Pipeline Hierarchy +- **Pipelines** have multiple **Runs**. +- Each **Run** consists of multiple **Steps**. +- Each **Step** produces multiple **Artifacts**. + +```mermaid +flowchart LR + pipelines -->|1:N| runs + runs -->|1:N| steps + steps -->|1:N| artifacts +``` #### Fetching Pipelines -- **Get a Pipeline**: Use `Client.get_pipeline()` to fetch a specific pipeline after it has been run. +- **Get a specific pipeline**: ```python from zenml.client import Client pipeline_model = Client().get_pipeline("first_pipeline") ``` -- **List All Pipelines**: Use `Client.list_pipelines()` or the CLI command `zenml pipeline list` to retrieve all registered pipelines. - ```python - pipelines = Client().list_pipelines() - ``` +- **List all pipelines**: + - **Python**: + ```python + pipelines = Client().list_pipelines() + ``` + - **CLI**: + ```shell + zenml pipeline list + ``` -#### Runs -- **Get All Runs**: Access all runs of a pipeline using the `runs` property. +#### Pipeline Runs +- **Get all runs of a pipeline**: ```python runs = pipeline_model.runs ``` -- **Get Last Run**: Retrieve the most recent run using `last_run` or the first element of `runs`. +- **Get the last run**: ```python last_run = pipeline_model.last_run # OR: pipeline_model.runs[0] ``` -- **Get Latest Run**: Execute the pipeline to get the latest run. +- **Execute a pipeline and get the latest run**: ```python run = training_pipeline() ``` -- **Get a Specific Run**: Use `Client.get_pipeline_run()` to fetch a specific run by its ID. +- **Get a specific run**: ```python pipeline_run = Client().get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") ``` #### Run Information -Each run contains useful information: -- **Status**: Access the run's status. +- **Status**: ```python - status = run.status + status = run.status # Possible states: initialized, failed, completed, running, cached ``` -- **Configuration**: Retrieve pipeline configuration settings. +- **Configuration**: ```python pipeline_config = run.config + pipeline_settings = run.config.settings ``` -- **Component-Specific Metadata**: Access additional metadata like orchestrator URLs. 
+- **Component-Specific Metadata**: ```python run_metadata = run.run_metadata + orchestrator_url = run_metadata["orchestrator_url"].value ``` #### Steps -- **Access Steps**: Use the `steps` attribute to get all steps of a run. +- **Get all steps of a run**: ```python steps = run.steps + step = run.steps["first_step"] ``` #### Artifacts -- **Output Artifacts**: Access output artifacts via `outputs` and `inputs` properties. +- **Access output artifacts**: ```python - output = step.outputs["output_name"] + output = step.outputs["output_name"] # or step.output for single output + my_pytorch_model = output.load() ``` -- **Fetch Artifacts Directly**: Use `Client.get_artifact()` to retrieve artifacts. +- **Fetch artifacts directly**: ```python artifact = Client().get_artifact('iris_dataset') + output = artifact.versions['2022'] ``` -- **Artifact Metadata**: Access metadata for artifacts. +#### Artifact Information +- **Metadata**: ```python output_metadata = output.run_metadata + storage_size_in_bytes = output_metadata["storage_size"].value ``` -- **Visualizations**: Use `output.visualize()` for visualizations in Jupyter notebooks. +- **Visualizations**: ```python output.visualize() ``` #### Fetching Information During Run Execution -You can fetch previous runs within a running pipeline step. +To fetch information during a running pipeline: ```python from zenml import get_step_context from zenml.client import Client @@ -12839,17 +12791,16 @@ from zenml.client import Client def my_step(): current_run_name = get_step_context().pipeline_run.name current_run = Client().get_pipeline_run(current_run_name) - previous_run = current_run.pipeline.runs[1] + previous_run = current_run.pipeline.runs[1] # index 0 is current ``` #### Code Example -A complete example to load a model from a pipeline: +Combining concepts into a simple script: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split -from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.client import Client @@ -12857,25 +12808,24 @@ from zenml.client import Client @step def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) - X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) - return X_train, X_test, y_train, y_test + return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) @step -def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: - model = SVC(gamma=gamma) - model.fit(X_train.to_numpy(), y_train.to_numpy()) - return model, model.score(X_train.to_numpy(), y_train.to_numpy()) +def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[SVC, "trained_model"], Annotated[float, "training_acc"]]: + model = SVC(gamma=gamma).fit(X_train, y_train) + return model, model.score(X_train, y_train) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() - svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) + svc_trainer(X_train=X_train, y_train=y_train, gamma=gamma) if __name__ == "__main__": last_run = training_pipeline() model = 
last_run.steps["svc_trainer"].outputs["trained_model"].load() ``` -This summary captures the essential technical details and code snippets necessary for understanding how to inspect and interact with pipeline runs and their outputs in ZenML. + +This summary captures the essential technical details and code snippets for inspecting pipeline runs and their outputs in ZenML. ================================================== @@ -12883,15 +12833,16 @@ This summary captures the essential technical details and code snippets necessar ### ZenML Step Retry Configuration -ZenML allows automatic retries for steps that fail, which is useful for handling intermittent issues, especially on GPU-backed hardware. You can configure the following parameters for step retries: +ZenML offers a built-in mechanism to automatically retry steps upon failure, useful for handling intermittent issues. This is particularly beneficial when using GPU-backed hardware, where resource availability may fluctuate. -- **max_retries:** Maximum number of retry attempts. +#### Retry Parameters +You can configure the following parameters for step retries: +- **max_retries:** Maximum retry attempts on failure. - **delay:** Initial delay (in seconds) before the first retry. - **backoff:** Multiplier for the delay after each retry. #### Using the @step Decorator - -You can define the retry configuration directly in your step using the `@step` decorator: +You can define the retry configuration in your step as shown below: ```python from zenml.config.retry_config import StepRetryConfig @@ -12905,19 +12856,83 @@ from zenml.config.retry_config import StepRetryConfig ) def my_step() -> None: raise Exception("This is a test exception") + +steps: + my_step: + retry: + max_retries: 3 + delay: 10 + backoff: 2 ``` -**Note:** Infinite retries are not supported. ZenML enforces an internal maximum to prevent infinite loops. Set a reasonable `max_retries` based on your use case. +**Note:** Infinite retries are not supported. Setting `max_retries` to a high value or omitting it will still enforce an internal limit to prevent infinite loops. It's advisable to set a reasonable `max_retries` based on your use case. -### See Also +### Related Documentation - [Failure/Success Hooks](use-failure-success-hooks.md) - [Configure Pipelines](../../pipeline-development/use-configuration-files/how-to-use-config.md) ================================================== +=== File: docs/book/how-to/pipeline-development/build-pipelines/fan-in-fan-out.md === + +### Summary of Fan-in and Fan-out Patterns in ZenML + +**Overview:** +The fan-out/fan-in pattern is a pipeline architecture that splits a single step into multiple parallel operations (fan-out) and consolidates the results back into a single step (fan-in). This pattern enhances parallel processing, distributed workloads, and data transformations. + +**Example Code:** +```python +from zenml import step, get_step_context, pipeline +from zenml.client import Client + +@step +def load_step() -> str: + return "Hello from ZenML!" 
+ +@step +def process_step(input_data: str) -> str: + return input_data + +@step +def combine_step(step_prefix: str, output_name: str) -> None: + run_name = get_step_context().pipeline_run.name + run = Client().get_pipeline_run(run_name) + + processed_results = {step_info.name: step_info.outputs[output_name][0].load() + for step_name, step_info in run.steps.items() if step_name.startswith(step_prefix)} + + print(",".join([f"{k}: {v}" for k, v in processed_results.items()])) + +@pipeline(enable_cache=False) +def fan_out_fan_in_pipeline(parallel_count: int) -> None: + input_data = load_step() + after = [process_step(input_data, id=f"process_{i}") for i in range(parallel_count)] + combine_step(step_prefix="process_", output_name="output", after=after) + +fan_out_fan_in_pipeline(parallel_count=8) +``` + +**Use Cases:** +- Parallel data processing +- Distributed model training +- Ensemble methods +- Batch processing +- Data validation across multiple sources +- Hyperparameter tuning + +**Important Notes:** +- The fan-in step requires using the ZenML Client to query results from parallel steps. +- Limitations: + 1. Steps may run sequentially if the orchestrator does not support parallel execution. + 2. The number of steps must be predetermined; dynamic step creation is not supported. + +This pattern is beneficial for optimizing resource utilization and managing complex workflows effectively. + +================================================== + === File: docs/book/how-to/pipeline-development/build-pipelines/get-past-pipeline-step-runs.md === -To retrieve past pipeline or step runs in ZenML, use the `get_pipeline` method along with the `last_run` property or by indexing into the runs. Here’s a concise example: +To retrieve past pipeline or step runs in ZenML, use the `get_pipeline` method along with the `last_run` property or by indexing into the runs. Here's a concise example: ```python from zenml.client import Client @@ -12942,7 +12957,7 @@ This code demonstrates how to access the latest run and the first run of a speci # Tagging Pipeline Runs -You can specify tags for your pipeline runs in the following ways: +You can specify tags for your pipeline runs in three ways: 1. **Configuration File**: ```yaml @@ -12951,107 +12966,84 @@ You can specify tags for your pipeline runs in the following ways: - tag_in_config_file ``` -2. **In Code**: - - Using the `@pipeline` decorator: +2. **Using the `@pipeline` Decorator**: ```python @pipeline(tags=["tag_on_decorator"]) def my_pipeline(): ... ``` - - Using the `with_options` method: +3. **Using `with_options` Method**: ```python my_pipeline = my_pipeline.with_options(tags=["tag_on_with_options"]) ``` -When you run the pipeline, tags from all specified locations will be merged and applied to the pipeline run. +When you run the pipeline, tags from all specified locations will be merged and applied to the run. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md === -### Hyperparameter Tuning with ZenML - -**Overview**: Hyperparameter tuning is not fully supported in ZenML yet, but it can be implemented using pipelines. This guide provides a basic example of how to perform hyperparameter tuning through a grid search. 
- -#### Basic Pipeline Example -A simple pipeline can iterate over hyperparameters, such as learning rates, using the following code: - -```python -@pipeline -def my_pipeline(step_count: int) -> None: - data = load_data_step() - after = [] - for i in range(step_count): - train_step(data, learning_rate=i * 0.0001, id=f"train_step_{i}") - after.append(f"train_step_{i}") - model = select_model_step(..., after=after) -``` - -This example demonstrates a grid search over a single dimension (learning rate). The `select_model_step` identifies the best-performing hyperparameters after all training steps. - -#### E2E Example -In the E2E example, the hyperparameter tuning stage includes a loop that executes `hp_tuning_single_search` for each model configuration, followed by `hp_tuning_select_best_model` to determine the best model configuration: - -```python -after = [] -search_steps_prefix = "hp_tuning_search_" -for i, model_search_configuration in enumerate(MetaConfig.model_search_space): - step_name = f"{search_steps_prefix}{i}" - hp_tuning_single_search( - model_metadata=ExternalArtifact(value=model_search_configuration), - id=step_name, - dataset_trn=dataset_trn, - dataset_tst=dataset_tst, - target=target, - ) - after.append(step_name) +### Summary of Hyperparameter Tuning with ZenML -best_model_config = hp_tuning_select_best_model( - search_steps_prefix=search_steps_prefix, after=after -) -``` +This documentation describes how to perform hyperparameter tuning using ZenML through a simple pipeline that implements a basic grid search for different learning rates. The process involves two main steps: `train_step` and `selection_step`. -#### Challenges -Currently, ZenML does not allow passing a variable number of artifacts into a step programmatically. The `select_model_step` must query artifacts produced by previous steps using the ZenML Client: +#### Key Components: -```python -from zenml import step, get_step_context -from zenml.client import Client +1. **Train Step**: + - Trains a model using a specified learning rate. + - Returns the trained model. + ```python + @step + def train_step(learning_rate: float) -> Annotated[ClassifierMixin, model_output_name]: + return ... # Train model with learning rate + ``` -@step -def select_model_step(): - run_name = get_step_context().pipeline_run.name - run = Client().get_pipeline_run(run_name) +2. **Selection Step**: + - Evaluates trained models to determine the best performing hyperparameters. + - Queries all artifacts produced by previous steps using ZenML Client. + ```python + @step + def selection_step(step_prefix: str, output_name: str) -> None: + run_name = get_step_context().pipeline_run.name + run = Client().get_pipeline_run(run_name) + trained_models_by_lr = {} + for step_name, step_info in run.steps.items(): + if step_name.startswith(step_prefix): + model = step_info.outputs[output_name][0].load() + lr = step_info.config.parameters["learning_rate"] + trained_models_by_lr[lr] = model + for lr, model in trained_models_by_lr.items(): + ... # Evaluate models + ``` + +3. **Pipeline Definition**: + - Constructs the pipeline to execute multiple training steps followed by the selection step. 
+ ```python + @pipeline + def my_pipeline(step_count: int) -> None: + after = [] + for i in range(step_count): + train_step(learning_rate=i * 0.0001, id=f"train_step_{i}") + after.append(f"train_step_{i}") + selection_step(step_prefix="train_step_", output_name=model_output_name, after=after) - trained_models_by_lr = {} - for step_name, step in run.steps.items(): - if step_name.startswith("train_step"): - for output_name, output in step.outputs.items(): - if output_name == "": - model = output.load() - lr = step.config.parameters["learning_rate"] - trained_models_by_lr[lr] = model - - # Evaluate models to find the best one - for lr, model in trained_models_by_lr.items(): - ... -``` + my_pipeline(step_count=4) + ``` -#### Additional Resources -For further implementation details, refer to the files in the `steps/hp_tuning` folder, which include: -- `hp_tuning_single_search(...)`: Performs randomized hyperparameter search. -- `hp_tuning_select_best_model(...)`: Identifies the best hyperparameters based on previous search results. +#### Important Notes: +- The current limitation is that a variable number of artifacts cannot be passed into a step programmatically; hence, the selection step must query artifacts using the ZenML Client. +- Additional resources include example implementations for hyperparameter tuning, such as randomized search and selection of the best model based on defined metrics. -This documentation provides a foundational understanding of implementing hyperparameter tuning in ZenML, emphasizing the current limitations and available resources for further exploration. +For further exploration, refer to the [E2E example](https://github.com/zenml-io/zenml/tree/main/examples/e2e) in the ZenML repository. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/name-your-pipeline-runs.md === -### Summary of Pipeline Run Naming in ZenML +# Naming Pipeline Runs -Pipeline run names are automatically generated based on the current date and time, as shown in the log output: +Pipeline run names are automatically generated based on the current date and time, as shown in the example: ```bash Pipeline run training_pipeline-2023_05_24-12_41_04_576473 has finished in 3.742s. @@ -13066,12 +13058,12 @@ training_pipeline = training_pipeline.with_options( training_pipeline() ``` -Run names must be unique. To ensure uniqueness, dynamically compute the run name or use placeholders that ZenML replaces. Placeholders can be set in the `@pipeline` decorator or the `pipeline.with_options` function. Standard placeholders include: +Run names must be unique. To ensure uniqueness, compute the run name dynamically or use placeholders that ZenML will replace. Placeholders can be set in the `@pipeline` decorator or the `pipeline.with_options` function. Standard placeholders include: - `{date}`: current date (e.g., `2024_11_27`) -- `{time}`: current UTC time (e.g., `11_07_09_326492`) +- `{time}`: current time in UTC format (e.g., `11_07_09_326492`) -Example of using placeholders in the run name: +Example of using placeholders in a custom run name: ```python training_pipeline = training_pipeline.with_options( @@ -13086,9 +13078,9 @@ training_pipeline() # Reference Environment Variables in ZenML Configurations -ZenML allows referencing environment variables in configurations using the syntax `${ENV_VARIABLE_NAME}`. +ZenML allows referencing environment variables in both code and configuration files using the placeholder syntax `${ENV_VARIABLE_NAME}`. 
-## In-code Example
+## In-Code Example

```python
from zenml import step

@@ -13106,7 +13098,7 @@
  combined_value: prefix_${ENV_VAR}_suffix
```

-This feature enhances flexibility in both code and configuration files.
+This feature enhances the flexibility of configurations by allowing dynamic values based on the environment.

==================================================

@@ -13114,31 +13106,30 @@

### Runtime Configuration of a Pipeline

-To run a pipeline with a different configuration, use the `pipeline.with_options` method. There are two primary ways to configure options:
+To run a pipeline with a different configuration, use the `pipeline.with_options` method. You can configure options in two ways:

-1. **Explicit Configuration**:
+1. Explicitly, by passing step configurations:
   ```python
   pipeline.with_options(steps={"trainer": {"parameters": {"param1": 1}}})
   ```
-
-2. **YAML File**:
+2. By passing a YAML file:
   ```python
   pipeline.with_options(config_file="path_to_yaml_file")
   ```

For more details on these options, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/README.md).

-**Exception**: To trigger a pipeline from a client or another pipeline, pass the `PipelineRunConfiguration` object. More information is available [here](../../pipeline-development/trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline).
+**Exception:** To trigger a pipeline from a client or another pipeline, use the `PipelineRunConfiguration` object. More information can be found [here](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline).

==================================================

=== File: docs/book/how-to/pipeline-development/build-pipelines/access-secrets-in-a-step.md ===

-# Accessing Secrets in ZenML Steps
+## Accessing Secrets in ZenML Steps

-ZenML secrets are collections of **key-value pairs** stored securely in the ZenML secrets store, each identified by a **name** for easy retrieval in pipelines and stacks. For configuration and creation of secrets, refer to the [platform guide on secrets](../../../getting-started/deploying-zenml/secret-management.md).
+ZenML secrets consist of **key-value pairs** stored securely in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks. For details on configuring and creating secrets, refer to the [platform guide on secrets](../../../getting-started/deploying-zenml/secret-management.md).

-You can access secrets in your steps using the ZenML `Client` API, which allows you to use secrets for API queries without hard-coding access keys.
+You can access secrets within your steps using the ZenML `Client` API, enabling you to use secrets for API queries without hard-coding access keys.

### Example Code
```python
@@ -13166,73 +13157,87 @@ def secret_loader() -> None:

### Summary of Parameterization in ZenML Pipelines

-**Overview**: Steps and pipelines in ZenML can be parameterized like Python functions. Parameters can be either **artifacts** (outputs from other steps) or **explicit parameters** (values provided during invocation).
+#### Overview
+Steps and pipelines in ZenML can be parameterized similarly to Python functions.
Parameters can be either **artifacts** (outputs from other steps) or **explicit parameters** (values provided during invocation). -#### Key Points: +#### Step Parameters +- **Artifacts**: Outputs from previous steps, used to share data. +- **Parameters**: Explicit values that configure step behavior, requiring JSON-serializable types via Pydantic. For non-JSON-serializable objects (e.g., NumPy arrays), use [External Artifacts](../../../user-guide/starter-guide/manage-artifacts.md#consuming-external-artifacts-within-a-pipeline). -1. **Parameters for Steps**: - - **Artifacts**: Outputs from previous steps, used to share data. - - **Parameters**: Explicit values not dependent on other steps. Only JSON-serializable values (via Pydantic) can be passed as parameters. For non-JSON-serializable objects (e.g., NumPy arrays), use **External Artifacts**. +#### Example Code +```python +from zenml import step, pipeline -2. **Example Code**: - ```python - from zenml import step, pipeline +@step +def my_step(input_1: int, input_2: int) -> None: + pass - @step - def my_step(input_1: int, input_2: int) -> None: - pass +@pipeline +def my_pipeline(): + int_artifact = some_other_step() + my_step(input_1=int_artifact, input_2=42) +``` - @pipeline - def my_pipeline(): - int_artifact = some_other_step() - my_step(input_1=int_artifact, input_2=42) - ``` +#### YAML Configuration +Parameters can be defined in a YAML file, allowing for easy updates without changing the code: -3. **YAML Configuration**: - - Parameters can be defined in a YAML file for easy updates. - - Example YAML: - ```yaml - parameters: - environment: production - steps: - my_step: - parameters: - input_2: 42 - ``` +**config.yaml** +```yaml +parameters: + environment: production +steps: + my_step: + parameters: + input_2: 42 +``` - - Corresponding Python code: - ```python - @pipeline - def my_pipeline(environment: str): - ... - - if __name__=="__main__": - my_pipeline.with_options(config_paths="config.yaml")() - ``` +**Python Code** +```python +from zenml import step, pipeline -4. **Conflicts in Configuration**: - - Conflicts may arise if parameters are defined in both YAML and code. The system will notify you of such conflicts. - - Example of a conflict: - ```yaml - parameters: - some_param: 24 - steps: - my_step: - parameters: - input_2: 42 - ``` - ```python - @pipeline - def my_pipeline(some_param: int): - my_step(input_1=42, input_2=43) - ``` +@step +def my_step(input_1: int, input_2: int) -> None: + ... -5. **Caching Behavior**: - - Steps are cached only if all parameter values or artifacts match previous executions. If upstream steps are not cached, the step will always execute. +@pipeline +def my_pipeline(environment: str): + ... -#### Additional Resources: -- For more on using configuration files: [Use Configuration Files](use-pipeline-step-parameters.md) -- For details on caching: [Control Caching Behavior](control-caching-behavior.md) +if __name__ == "__main__": + my_pipeline.with_options(config_paths="config.yaml")() +``` + +#### Conflicts in Configuration +Conflicts may arise if parameters are defined in both the YAML file and the code. ZenML will notify you of such conflicts. 
+ +**Example of Conflict** +**config.yaml** +```yaml +parameters: + some_param: 24 +steps: + my_step: + parameters: + input_2: 42 +``` + +**Python Code** +```python +@pipeline +def my_pipeline(some_param: int): + my_step(input_1=42, input_2=43) + +if __name__ == "__main__": + my_pipeline(23) +``` + +#### Caching Behavior +- **Parameters**: A step is cached only if all parameter values match previous executions. +- **Artifacts**: A step is cached only if all input artifacts match previous executions. If upstream steps are not cached, the step will always execute. + +#### Additional Resources +- [Use configuration files to set parameters](use-pipeline-step-parameters.md) +- [How caching works and how to control it](control-caching-behavior.md) ================================================== @@ -13240,11 +13245,9 @@ def secret_loader() -> None: ### Summary: Running Pipelines Asynchronously -By default, pipelines run synchronously, meaning the terminal displays logs in real-time during execution. To run pipelines asynchronously, you can configure the orchestrator to set `synchronous=False`. This can be done either globally or at the pipeline configuration level. - -**Code Example for Asynchronous Pipeline:** +By default, pipelines run synchronously, meaning the terminal displays logs during the pipeline execution. To run pipelines asynchronously, you can configure the orchestrator either globally or at the pipeline level. -1. **Python Code:** +1. **Global Configuration**: Set `synchronous=False` in the orchestrator settings. ```python from zenml import pipeline @@ -13253,7 +13256,7 @@ By default, pipelines run synchronously, meaning the terminal displays logs in r ... ``` -2. **YAML Configuration:** +2. **YAML Configuration**: Modify the YAML config file to set the orchestrator to run asynchronously. ```yaml settings: orchestrator.: @@ -13266,16 +13269,16 @@ For more details on orchestrators, refer to the [orchestrators documentation](.. === File: docs/book/how-to/pipeline-development/build-pipelines/using-a-custom-step-invocation-id.md === -### Summary of Custom Step Invocation ID in ZenML +## Custom Step Invocation ID in ZenML -When invoking a ZenML step in a pipeline, each step is assigned a unique **invocation ID**. This ID can be used to manage the execution order of steps or to retrieve information post-execution. +When invoking a ZenML step in a pipeline, a unique **invocation ID** is assigned. This ID can be used to define the execution order of steps or to fetch information about the invocation post-execution. -#### Key Points: -- The first invocation of a step uses its name as the invocation ID (e.g., `my_step`). -- Subsequent invocations append a suffix (`_2`, `_3`, etc.) to create unique IDs (e.g., `my_step_2`). -- A custom invocation ID can be specified by passing an `id` parameter, which must be unique within the pipeline. +### Key Points: +- The first invocation of a step uses the step name as its ID (e.g., `my_step`). +- Subsequent invocations append a suffix (_2, _3, etc.) to ensure uniqueness (e.g., `my_step_2`). +- A custom invocation ID can be specified by passing an `id` parameter, which must be unique across all invocations in the pipeline. -#### Example Code: +### Example Code: ```python from zenml import pipeline, step @@ -13290,38 +13293,36 @@ def example_pipeline(): my_step(id="my_custom_invocation_id") # Custom ID ``` -This functionality allows for flexible management of step invocations within ZenML pipelines. 
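
Because invocation IDs are the keys of a pipeline run's `steps` dictionary, a custom ID also makes that invocation easy to fetch after execution. A minimal sketch, assuming `example_pipeline` above has already run:

```python
from zenml.client import Client

# Look up the step run of the custom invocation in the latest run
run = Client().get_pipeline("example_pipeline").last_run
step_run = run.steps["my_custom_invocation_id"]
```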
- ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/compose-pipelines.md === -### Summary of ZenML Pipeline Composition Documentation +### Summary of ZenML Pipeline Composition -**Overview**: ZenML enables the reuse of steps between pipelines to reduce code duplication by allowing the composition of pipelines. +ZenML facilitates the reuse of steps between pipelines by allowing the composition of pipelines, which helps avoid code duplication. -**Key Points**: -- **Pipeline Composition**: Common functionalities can be extracted into separate functions, which can then be reused across different pipelines. -- **Example Code**: - ```python - from zenml import pipeline +#### Example Code - @pipeline - def data_loading_pipeline(mode: str): - data = training_data_loader_step() if mode == "train" else test_data_loader_step() - return preprocessing_step(data) +```python +from zenml import pipeline - @pipeline - def training_pipeline(): - training_data = data_loading_pipeline(mode="train") - model = training_step(data=training_data) - evaluation_step(model=model, data=data_loading_pipeline(mode="test")) - ``` +@pipeline +def data_loading_pipeline(mode: str): + data = training_data_loader_step() if mode == "train" else test_data_loader_step() + return preprocessing_step(data) + +@pipeline +def training_pipeline(): + training_data = data_loading_pipeline(mode="train") + model = training_step(data=training_data) + test_data = data_loading_pipeline(mode="test") + evaluation_step(model=model, data=test_data) +``` -- **Functionality**: The `data_loading_pipeline` acts as a step within the `training_pipeline`, integrating its steps into the parent pipeline. Only the parent pipeline appears in the dashboard. -- **Triggering Pipelines**: For details on triggering one pipeline from another, refer to the advanced usage documentation. +In this example, `data_loading_pipeline` is called within `training_pipeline`, effectively treating it as a step in the latter. Only the parent pipeline appears on the dashboard. For triggering a pipeline from another, refer to the advanced usage documentation. -**Additional Resources**: For more information on orchestrators, see [orchestrators.md](../../../component-guide/orchestrators/orchestrators.md). +#### Additional Resources +- Learn more about orchestrators [here](../../../component-guide/orchestrators/orchestrators.md). ================================================== @@ -13330,12 +13331,12 @@ This functionality allows for flexible management of step invocations within Zen ### Summary of ZenML Failure and Success Hooks Documentation #### Overview -Hooks in ZenML allow actions to be performed after step execution, useful for notifications, logging, or resource cleanup. There are two types of hooks: +Hooks in ZenML allow actions to be performed after the execution of a step, useful for notifications, logging, or resource cleanup. There are two types of hooks: - **`on_failure`**: Triggered when a step fails. - **`on_success`**: Triggered when a step succeeds. #### Defining Hooks -Hooks are defined as callback functions accessible within the pipeline repository. The `on_failure` hook can accept a `BaseException` argument to access the specific exception that caused the failure. +Hooks are defined as callback functions accessible within the pipeline repository. The `on_failure` hook can accept a `BaseException` argument to access the exception that caused the failure. 
**Example:** ```python @@ -13357,30 +13358,33 @@ def my_successful_step() -> int: ``` #### Pipeline-Level Hooks -Hooks can also be defined at the pipeline level, which apply to all steps within that pipeline. Step-level hooks take precedence over pipeline-level hooks. - -**Example:** +Hooks can also be defined at the pipeline level to apply to all steps: ```python @pipeline(on_failure=on_failure, on_success=on_success) def my_pipeline(...): ... ``` +**Note**: Step-level hooks take precedence over pipeline-level hooks. #### Accessing Step Information -Inside hooks, you can use `get_step_context()` to access information about the current pipeline run or step. +Inside hooks, use `get_step_context()` to access step and pipeline run information. **Example:** ```python -from zenml import get_step_context +from zenml import step, get_step_context def on_failure(exception: BaseException): context = get_step_context() print(context.step_run.name) - print(context.step_run.config.parameters) + print("Step failed!") + +@step(on_failure=on_failure) +def my_step(some_parameter: int = 1): + raise ValueError("My exception") ``` -#### Linking to the Alerter Component -You can use the Alerter component to send notifications within hooks. +#### Using the Alerter Component +Hooks can utilize the Alerter component to send notifications. **Example:** ```python @@ -13391,7 +13395,7 @@ def on_failure(): Client().active_stack.alerter.post(f"{step_name} just failed!") ``` -Standard hooks using the Alerter can be defined as follows: +**Standard Hooks:** ```python from zenml.hooks import alerter_success_hook, alerter_failure_hook @@ -13400,8 +13404,8 @@ def my_step(...): ... ``` -#### Using OpenAI ChatGPT Hook -The OpenAI ChatGPT failure hook generates suggestions for fixing exceptions. Ensure you have the OpenAI integration installed and your API key stored as a ZenML secret. +#### OpenAI ChatGPT Failure Hook +This hook generates potential fixes for exceptions using OpenAI's API. Ensure you have the OpenAI integration installed and your API key stored in a ZenML secret. **Installation:** ```shell @@ -13418,30 +13422,29 @@ def my_step(...): ... ``` -This hook will send a message with suggestions for fixing the failure via the configured alerter. +This hook can provide suggestions to help resolve issues in your code. For GPT-4 users, use `openai_gpt4_alerter_failure_hook`. ### Conclusion -ZenML hooks provide a flexible way to handle step execution outcomes, enabling developers to implement notifications and manage errors effectively. +ZenML hooks provide a flexible way to manage actions based on step outcomes, including notifications and error handling, enhancing the robustness of your pipelines. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/run-an-individual-step.md === -### Summary of ZenML Step Execution Documentation +## Summary of ZenML Step Execution -To run an individual step in ZenML, call the step like a regular Python function. ZenML will create an unlisted pipeline that executes the step on the active stack, visible in the "Runs" tab. +### Running an Individual Step +To execute a single step in ZenML, call the step like a standard Python function. ZenML will create an unlisted pipeline to run the step on the active stack, which can be viewed in the "Runs" tab of the dashboard. 
-#### Code Example for Running a Step
+### Example Code
```python
from zenml import step
import pandas as pd
from sklearn.base import ClassifierMixin
from sklearn.svm import SVC
from typing import Tuple, Annotated

@step(step_operator="")
def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]:
-    """Train a sklearn SVC classifier."""
    model = SVC(gamma=gamma)
    model.fit(X_train.to_numpy(), y_train.to_numpy())
    train_acc = model.score(X_train.to_numpy(), y_train.to_numpy())
@@ -13450,19 +13453,18 @@ def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001)

X_train = pd.DataFrame(...)
y_train = pd.Series(...)
-
-# Execute the step
model, train_acc = svc_trainer(X_train=X_train, y_train=y_train)
```

-#### Running the Step Function Directly
+### Running the Step Function Directly
To bypass ZenML and run the step function directly, use the `entrypoint(...)` method:
+
```python
model, train_acc = svc_trainer.entrypoint(X_train=X_train, y_train=y_train)
```

-#### Default Behavior Configuration
-To make direct function calls the default behavior, set the environment variable `ZENML_RUN_SINGLE_STEPS_WITHOUT_STACK` to `True`. This will allow calling `svc_trainer(...)` to execute without involving the ZenML stack.
+### Default Behavior
+To set the default behavior to run steps without ZenML, set the environment variable `ZENML_RUN_SINGLE_STEPS_WITHOUT_STACK` to `True`. This will make calling `svc_trainer(...)` execute the underlying function directly.

==================================================

=== File: docs/book/how-to/pipeline-development/build-pipelines/delete-a-pipeline.md ===

### Deleting Pipelines and Pipeline Runs

-#### Delete a Pipeline
To delete a pipeline, use either the CLI or the Python SDK:

-**CLI:**
+#### CLI
```shell
zenml pipeline delete
```

-**Python SDK:**
+#### Python SDK
```python
from zenml.client import Client

Client().delete_pipeline()
```

**Note:** Deleting a pipeline does not remove associated runs or artifacts.

-To delete multiple pipelines with a common prefix, use the following script:
+For deleting multiple pipelines with the same prefix, use the following Python script:
+
```python
from zenml.client import Client

@@ -13498,30 +13500,36 @@
target_pipeline_ids = [p.id for p in pipelines_list.items]

if input("Do you really want to delete these pipelines? (y/n): ").lower() == 'y':
    for pid in target_pipeline_ids:
        client.delete_pipeline(pid)
+    print("Deletion complete")
+else:
+    print("Deletion cancelled")
```

-#### Delete a Pipeline Run
-To delete a pipeline run, use the CLI or the client:
+### Deleting Pipeline Runs
+
+To delete a pipeline run, use the CLI or the Python SDK:

-**CLI:**
+#### CLI
```shell
zenml pipeline runs delete
```

-**Python SDK:**
+#### Python SDK
```python
from zenml.client import Client

Client().delete_pipeline_run()
-```
+```
+
+This documentation provides the necessary commands and scripts for deleting pipelines and their runs efficiently.

==================================================

=== File: docs/book/how-to/pipeline-development/build-pipelines/control-execution-order-of-steps.md ===

-### Summary of ZenML Execution Order Control
+# Control Execution Order of Steps in ZenML

-ZenML determines the execution order of pipeline steps based on data dependencies.
For example, in the following pipeline, `step_3` depends on the outputs of `step_1` and `step_2`, allowing both to execute in parallel before `step_3` starts: +ZenML determines the execution order of pipeline steps based on data dependencies. For example, in the pipeline below, `step_3` relies on outputs from `step_1` and `step_2`, allowing both to run in parallel before `step_3` starts. ```python from zenml import pipeline @@ -13533,7 +13541,7 @@ def example_pipeline(): step_3(step_1_output, step_2_output) ``` -To enforce specific execution order constraints, you can use non-data dependencies by passing invocation IDs. For a single step dependency, use `after="other_step"`; for multiple dependencies, pass a list: +To enforce specific execution constraints, you can use non-data dependencies by specifying invocation IDs. For a single step, use `my_step(after="other_step")`, or for multiple steps, use a list: `my_step(after=["other_step", "other_step_2"])`. Refer to the [documentation](using-a-custom-step-invocation-id.md) for details on invocation IDs. ```python from zenml import pipeline @@ -13545,7 +13553,7 @@ def example_pipeline(): step_3(step_1_output, step_2_output) ``` -In this modified pipeline, `step_1` will only start after `step_2` has completed. For more details on invocation IDs, refer to the [documentation here](using-a-custom-step-invocation-id.md). +In this example, `step_1` will only start after `step_2` has completed. ================================================== @@ -13553,26 +13561,25 @@ In this modified pipeline, `step_1` will only start after `step_2` has completed ### Summary: Scheduling Pipelines in ZenML -**Overview**: This documentation explains how to set, pause, and stop schedules for pipelines in ZenML, noting that scheduling support varies by orchestrator. - #### Supported Orchestrators -| Orchestrator | Scheduling Support | -|----------------------------------|--------------------| -| AirflowOrchestrator | ✅ | -| AzureMLOrchestrator | ✅ | -| DatabricksOrchestrator | ✅ | -| HyperAIOrchestrator | ✅ | -| KubeflowOrchestrator | ✅ | -| KubernetesOrchestrator | ✅ | -| LocalOrchestrator | ⛔️ | -| LocalDockerOrchestrator | ⛔️ | -| SagemakerOrchestrator | ⛔️ | -| SkypilotAWS/ Azure/ GCP/ Lambda | ⛔️ | -| TektonOrchestrator | ⛔️ | -| VertexOrchestrator | ✅ | +Not all orchestrators support scheduling. The following orchestrators do support it: +- AirflowOrchestrator ✅ +- AzureMLOrchestrator ✅ +- DatabricksOrchestrator ✅ +- HyperAIOrchestrator ✅ +- KubeflowOrchestrator ✅ +- KubernetesOrchestrator ✅ +- VertexOrchestrator ✅ + +Orchestrators that do not support scheduling: +- LocalOrchestrator ⛔️ +- LocalDockerOrchestrator ⛔️ +- SagemakerOrchestrator ⛔️ +- Various SkypilotOrchestrators ⛔️ +- TektonOrchestrator ⛔️ #### Setting a Schedule -To schedule a pipeline, you can use either cron expressions or human-readable notations: +To set a schedule for a pipeline, use the `Schedule` class with either cron expressions or human-readable notations: ```python from zenml.config.schedule import Schedule @@ -13583,9 +13590,9 @@ from datetime import datetime def my_pipeline(...): ... 
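# Note: only one Schedule object is needed; the second assignment below
# overwrites the first, so pick whichever style fits your use case.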
-# Using cron expression +# Cron expression example schedule = Schedule(cron_expression="5 14 * * 3") -# Using human-readable notation +# Human-readable example schedule = Schedule(start_time=datetime.now(), interval_second=1800) my_pipeline = my_pipeline.with_options(schedule=schedule) @@ -13595,27 +13602,27 @@ my_pipeline() For more scheduling options, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.schedule.Schedule). #### Pausing/Stopping a Schedule -The method to pause or stop a schedule depends on the orchestrator. For instance, in Kubeflow, you can use its UI for this purpose. Users should consult their orchestrator's documentation for specific instructions. +The method to pause or stop a scheduled run depends on the orchestrator. For instance, in Kubeflow, you can use the UI for this purpose. Always refer to your orchestrator's documentation for specific instructions. -**Important Note**: ZenML schedules the run, but users are responsible for managing the lifecycle of the schedule. Running a pipeline with a schedule multiple times creates distinct scheduled pipelines with unique names. +**Important Note:** ZenML schedules the run, but managing the lifecycle of the schedule is the user's responsibility. Running a pipeline with a schedule multiple times creates separate scheduled pipelines with unique names. #### Additional Resources -- Learn about orchestrators [here](../../../component-guide/orchestrators/orchestrators.md). +For more information on orchestrators, visit [orchestrators.md](../../../component-guide/orchestrators/orchestrators.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/step-output-typing-and-annotation.md === -### Summary of ZenML Step Output Typing and Annotation +### Summary of Step Output Typing and Annotation in ZenML **Step Outputs Storage**: Outputs from steps are stored in an artifact store. Annotate and name them for clarity. #### Type Annotations - Type annotations are optional but beneficial: - **Type Validation**: Ensures correct input types from upstream steps. - - **Better Serialization**: Allows ZenML to select an appropriate materializer for outputs. Without annotations, it defaults to using Cloudpickle, which is not production-ready due to compatibility issues and potential security risks. + - **Better Serialization**: With annotations, ZenML selects the appropriate materializer for outputs. Custom materializers can be created if built-in options are insufficient. -**Warning**: Using `CloudpickleMaterializer` can expose systems to malicious file uploads. +**Warning**: The built-in `CloudpickleMaterializer` is not production-ready due to compatibility issues across Python versions and potential security risks. #### Code Examples ```python @@ -13631,23 +13638,22 @@ def divide(a: int, b: int) -> Tuple[int, int]: return a // b, a % b ``` -- To enforce type annotations, set `ZENML_ENFORCE_TYPE_ANNOTATIONS=True`. ZenML will raise exceptions for missing annotations. +- Set `ZENML_ENFORCE_TYPE_ANNOTATIONS=True` to enforce type annotations. -#### Tuple vs. Multiple Outputs -- ZenML differentiates single output artifacts from multiple outputs based on the return statement: - - **Multiple Outputs**: If returning a tuple literal (e.g., `return (1, 2)`). - - **Single Output**: Any other return type of `Tuple`. +#### Tuple vs Multiple Outputs +- A step with a tuple literal in the return statement is treated as having multiple outputs. 
Otherwise, it is a single output of type `Tuple`. + +```python +@step +def my_step() -> Tuple[int, int]: + return 0, 1 # Multiple outputs +``` #### Output Naming -- Default output names: - - Single output: `output` - - Multiple outputs: `output_0, output_1, ...` +- Default names: `output` for single outputs, `output_0`, `output_1`, etc., for multiple outputs. - Custom names can be set using `Annotated`: -```python -from typing_extensions import Annotated -from typing import Tuple -from zenml import step +```python @step def square_root(number: int) -> Annotated[float, "custom_output_name"]: return number ** 0.5 @@ -13660,27 +13666,27 @@ def divide(a: int, b: int) -> Tuple[ return a // b, a % b ``` -**Note**: Without custom names, artifacts are named based on the pipeline and step names. +- Without custom names, artifacts are named based on the pipeline and step names. ### Additional Resources -- For more on output annotation, see [return-multiple-outputs-from-a-step.md](../../data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md). -- For custom data types, refer to [handle-custom-data-types.md](../../data-artifact-management/handle-data-artifacts/handle-custom-data-types.md). +- For more on output annotation: [Return Multiple Outputs](../../data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md) +- For custom data types: [Handle Custom Data Types](../../data-artifact-management/handle-data-artifacts/handle-custom-data-types.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/control-caching-behavior.md === -### Caching Behavior in ZenML Pipelines +### ZenML Caching Behavior Summary -By default, ZenML caches steps in pipelines when the code and parameters remain unchanged. +By default, ZenML caches steps in pipelines when the code and parameters remain unchanged. -#### Step and Pipeline Caching Configuration +#### Caching Control -- **Step Level Caching**: +- **Step-Level Caching**: - Use `@step(enable_cache=True)` to enable caching. - - Use `@step(enable_cache=False)` to disable caching, overriding pipeline settings. + - Use `@step(enable_cache=False)` to disable caching, which overrides pipeline-level settings. -- **Pipeline Level Caching**: +- **Pipeline-Level Caching**: - Use `@pipeline(enable_cache=True)` to enable caching for the entire pipeline. #### Example Code @@ -13699,16 +13705,16 @@ def simple_ml_pipeline(parameter: int): ``` #### Dynamic Configuration -Caching settings can be modified after initial configuration: +Caching settings can be modified after initial definition: ```python my_step.configure(enable_cache=...) my_pipeline.configure(enable_cache=...) ``` -#### Additional Resources -For YAML configuration, refer to the [use-configuration-files](../../pipeline-development/use-configuration-files/) documentation. +#### Additional Information +For YAML configuration, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/). -**Note**: Caching occurs only when code and parameters are unchanged. +**Note**: Caching occurs only when both code and parameters remain unchanged. ================================================== @@ -13716,9 +13722,10 @@ For YAML configuration, refer to the [use-configuration-files](../../pipeline-de ### Summary of ZenML Pipeline Documentation -**Overview**: Building pipelines in ZenML is achieved using the `@step` and `@pipeline` decorators. 
+**Overview**: Building pipelines in ZenML involves using the `@step` and `@pipeline` decorators to define steps and combine them into a pipeline. + +#### Code Example -#### Example Code ```python from zenml import pipeline, step @@ -13738,333 +13745,112 @@ def simple_ml_pipeline(): dataset = load_data() train_model(dataset) -# Run the pipeline +# Execute the pipeline simple_ml_pipeline() ``` #### Execution and Logging -When executed, the pipeline logs its run to the ZenML dashboard, where users can view the Directed Acyclic Graph (DAG) and associated metadata. A ZenML server must be running locally or remotely to access the dashboard. +- When the pipeline is executed, it logs the run to the ZenML dashboard, which requires a ZenML server (local or remote) to view the Directed Acyclic Graph (DAG) and associated metadata. #### Additional Features -For advanced pipeline interactions, refer to the following topics: +For advanced interactions with your pipeline, refer to the following topics: - Configure pipeline/step parameters - Name and annotate step outputs - Control caching behavior -- Run pipeline from another pipeline -- Control execution order of steps - Customize step invocation IDs - Name pipeline runs - Use failure/success hooks - Hyperparameter tuning -- Attach and fetch metadata -- Enable/disable log storing +- Attach and fetch metadata within steps and during pipeline composition +- Enable or disable log storing - Access secrets in a step -For more details, consult the linked documentation sections. +For more details, consult the respective documentation links provided in the original text. ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md === -### Server Environment Configuration - -The ZenML server environment is configured using environment variables, which must be set prior to deploying your server instance. For a complete list of available environment variables, refer to the [environment variables documentation](../../../reference/environment-variables.md). +### How to Configure the Server Environment -![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) +The ZenML server environment is configured using environment variables, which must be set prior to deploying your server instance. For a complete list of available environment variables, refer to the [environment variables documentation](../../../reference/environment-variables.md). ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/handling-dependencies.md === -# Handling Dependency Conflicts in ZenML +### Handling Dependency Conflicts with ZenML This documentation addresses common issues with conflicting dependencies when using ZenML alongside other libraries. ZenML is designed to be stack- and integration-agnostic, which can lead to dependency conflicts. -## Installing Dependencies -Use the command `zenml integration install ...` to install dependencies for specific integrations. After installing additional dependencies, verify that ZenML requirements are met by running `zenml integration list` and checking for a green tick symbol. +#### Installing Dependencies +Use the command: +```bash +zenml integration install ... +``` +to install dependencies for specific integrations. 
After installation, verify that requirements are met by running: +```bash +zenml integration list +``` +Look for a green tick symbol next to your desired integrations. -## Suggestions for Resolving Dependency Conflicts +#### Suggestions for Resolving Dependency Conflicts -### Use `pip-compile` -Utilize `pip-compile` from the `pip-tools` package to create a static `requirements.txt` file for consistent dependency management across environments. For guidance, refer to the [gitflow repository](https://github.com/zenml-io/zenml-gitflow#-software-requirements-management). +1. **Use `pip-compile` for Reproducibility**: + Utilize `pip-compile` from the [pip-tools package](https://pip-tools.readthedocs.io/) to create a static `requirements.txt` file. For more details, refer to the [gitflow repository](https://github.com/zenml-io/zenml-gitflow#-software-requirements-management). -### Use `pip check` -Run `pip check` to identify compatibility issues among your environment's dependencies. This command will list any conflicts, which may or may not affect your use case. +2. **Run `pip check`**: + Execute `pip check` to identify any dependency conflicts in your environment. -### Known Dependency Issues -Some ZenML integrations have strict dependency requirements. For example, ZenML requires `click~=8.0.3` for its CLI, and using a higher version may lead to unexpected behaviors. +3. **Known Issues**: + ZenML has strict dependency requirements. For example, it requires `click~=8.0.3`. Using a version greater than 8.0.3 may cause issues. -### Manual Dependency Installation -You can manually install dependencies instead of using ZenML's integration installation, although this is not recommended. The command `zenml integration install ...` effectively runs a `pip install ...` command for the integration's dependencies. +#### Manual Dependency Management +You can bypass ZenML's integration installation and manually install dependencies, though this is not recommended. The command `zenml integration install ...` effectively runs `pip install ...` for the integration's dependencies. To manually install dependencies, use: ```bash # Export requirements to a file zenml integration export-requirements --output-file integration-requirements.txt INTEGRATION_NAME -# Print requirements to the console +# Print requirements to console zenml integration export-requirements INTEGRATION_NAME ``` -After modifying the requirements, if using a remote orchestrator, update the `DockerSettings` object to ensure compatibility. - -This summary provides essential information on managing dependency conflicts in ZenML while maintaining clarity and brevity. +After modifying the requirements, if using a remote orchestrator, update the `DockerSettings` object accordingly (details [here](../../../how-to/customize-docker-builds/docker-settings-on-a-pipeline.md)). ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/README.md === -# ZenML Environment Configuration Guide - -## Overview of Environments -ZenML deployments involve multiple environments: the **Client Environment**, **ZenML Server Environment**, and **Execution Environments**. - -### Client Environment -- **Purpose**: Compiles ZenML pipelines (e.g., in a `run.py` script). 
-- **Types**: - - Local development - - CI runner in production - - ZenML Pro runner - - Runner image orchestrated by ZenML server -- **Dependency Management**: Use package managers (e.g., `pip`, `poetry`) to install ZenML and required integrations. -- **Key Steps**: - 1. Compile pipeline via `@pipeline` function. - 2. Create/trigger pipeline and step build environments if remote. - 3. Trigger a run in the orchestrator. -- **Note**: The `@pipeline` function is called only in this environment, focusing on compile time logic. - -### ZenML Server Environment -- **Description**: A FastAPI application managing pipelines and metadata, including the ZenML Dashboard. -- **Dependency Management**: Install dependencies during ZenML deployment, primarily for custom integrations. - -### Execution Environments -- **Local Execution**: No distinct execution environment; client, server, and execution are the same. -- **Remote Execution**: ZenML transfers code to a remote orchestrator using Docker images known as execution environments. -- **Image Configuration**: Starts with a base image containing ZenML and Python, then adds pipeline dependencies. Follow the [containerize your pipeline](../../../how-to/customize-docker-builds/README.md) guide for configuration. - -### Image Builder Environment -- **Default Behavior**: Execution environments are created locally using the local Docker client, requiring Docker installation. -- **Image Builders**: ZenML provides image builders as a stack component to build and push Docker images in a specialized environment. If not configured, the local image builder is used for consistency. - -This guide provides a structured approach to managing dependencies and configurations across the various ZenML environments, ensuring efficient pipeline execution and deployment. - -================================================== - -=== File: docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-python.md === - -### ZenML Template Creation and Execution Guide - -**Feature Availability**: This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. - -#### Creating a Template - -To create a run template using the ZenML client: - -1. **From a Pipeline Run**: - ```python - from zenml.client import Client - - run = Client().get_pipeline_run() - Client().create_run_template(name=, deployment_id=run.deployment_id) - ``` - - **Note**: Select a pipeline run executed on a **remote stack** (remote orchestrator, artifact store, container registry). - -2. **From Pipeline Definition** (with an active remote stack): - ```python - from zenml import pipeline - - @pipeline - def my_pipeline(): - ... - - template = my_pipeline.create_run_template(name=) - ``` - -#### Running a Template - -To run a created template: - -```python -from zenml.client import Client - -template = Client().get_run_template() -config = template.config_template - -# [OPTIONAL] Modify the config here - -Client().trigger_pipeline(template_id=template.id, run_configuration=config) -``` -- The new run will execute on the same stack as the original. 
- -#### Advanced Usage: Running a Template from Another Pipeline - -You can trigger one pipeline from another using the following structure: - -```python -import pandas as pd -from zenml import pipeline, step -from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact -from zenml.artifacts.utils import load_artifact -from zenml.client import Client -from zenml.config.pipeline_run_configuration import PipelineRunConfiguration - -@step -def trainer(data_artifact_id: str): - df = load_artifact(data_artifact_id) - -@pipeline -def training_pipeline(): - trainer() - -@step -def load_data() -> pd.DataFrame: - ... - -@step -def trigger_pipeline(df: UnmaterializedArtifact): - run_config = PipelineRunConfiguration( - steps={"trainer": {"parameters": {"data_artifact_id": df.id}}} - ) - Client().trigger_pipeline("training_pipeline", run_configuration=run_config) - -@pipeline -def loads_data_and_triggers_training(): - df = load_data() - trigger_pipeline(df) -``` - -### Additional Resources -- Learn more about [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) and [`trigger_pipeline`](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client) in the SDK Docs. -- Explore Unmaterialized Artifacts [here](../../data-artifact-management/complex-usecases/unmaterialized-artifacts.md). - -================================================== - -=== File: docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-cli.md === - -### ZenML CLI: Create a Run Template - -**Feature Availability**: This feature is exclusive to [ZenML Pro](https://zenml.io/pro). Sign up [here](https://cloud.zenml.io) for access. - -#### Command to Create a Template -You can create a run template using the ZenML CLI with the following command: - -```bash -zenml pipeline create-run-template --name= -``` -- Replace `` with `run.my_pipeline` if your pipeline is defined in `run.py`. - -**Note**: An active **remote stack** is required to execute this command. Alternatively, specify a stack using the `--stack` option. - -================================================== - -=== File: docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-dashboard.md === - -### ZenML Dashboard Template Management - -**Feature Availability**: This functionality is exclusive to [ZenML Pro](https://zenml.io/pro). Sign up [here](https://cloud.zenml.io) for access. - -#### Creating a Template -1. Navigate to a pipeline run executed on a remote stack (with a remote orchestrator, artifact store, and container registry). -2. Click `+ New Template`, provide a name, and select `Create`. - -#### Running a Template -- To run a template: - - Click `Run a Pipeline` on the main `Pipelines` page, or - - Access a specific template page and click `Run Template`. - -You will be directed to the `Run Details` page where you can: -- Upload a `.yaml` configuration file or -- Modify the configuration using the editor. - -Upon execution, a new run will occur on the same stack as the original run. - -================================================== - -=== File: docs/book/how-to/pipeline-development/trigger-pipelines/README.md === - -### Triggering a Pipeline in ZenML - -In ZenML, the primary method to execute a pipeline is by using a pipeline function. 
Below is a concise example demonstrating how to define and run a simple machine learning pipeline: - -```python -from zenml import step, pipeline - -@step -def load_data() -> dict: - return {'features': [[1, 2], [3, 4], [5, 6]], 'labels': [0, 1, 0]} - -@step -def train_model(data: dict) -> None: - total_features = sum(map(sum, data['features'])) - total_labels = sum(data['labels']) - print(f"Trained model using {len(data['features'])} data points. " - f"Feature sum is {total_features}, label sum is {total_labels}.") - -@pipeline -def simple_ml_pipeline(): - dataset = load_data() - train_model(dataset) - -if __name__ == "__main__": - simple_ml_pipeline() -``` - -### Run Templates - -Run Templates are parameterized configurations for ZenML pipelines that can be executed via the ZenML dashboard or the Client/REST API. They serve as customizable blueprints for pipeline runs. Note that this feature is exclusive to ZenML Pro users. - -For more information on using templates, refer to the following resources: -- [Use templates: Python SDK](use-templates-python.md) -- [Use templates: CLI](use-templates-cli.md) -- [Use templates: Dashboard](use-templates-dashboard.md) -- [Use templates: REST API](use-templates-rest-api.md) - -================================================== - -=== File: docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-rest-api.md === - -### ZenML REST API: Creating and Running a Template - -**Note:** This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. +# Configure Python Environments -#### Prerequisites -To trigger a pipeline via the REST API, you must have at least one run template for the pipeline and know the pipeline name. +ZenML deployments involve multiple environments for managing dependencies and configurations. The environments include: -#### Steps to Trigger a Pipeline +1. **Client Environment (Runner Environment)**: + - Where ZenML pipelines are compiled (e.g., in `run.py`). + - Types include local development, CI runner, ZenML Pro runner, and runner images orchestrated by the ZenML server. + - Use a package manager (e.g., `pip`, `poetry`) to manage dependencies, including the ZenML package and required integrations. + - Key steps to start a pipeline: + 1. Compile an intermediate representation using the `@pipeline` function. + 2. Create or trigger pipeline and step build environments if running remotely. + 3. Trigger a run in the orchestrator. + - The `@pipeline` function is called only in this environment, focusing on compile time rather than execution time. -1. **Get Pipeline ID:** - - Call the endpoint to retrieve the pipeline ID. - ```shell - curl -X 'GET' \ - '/api/v1/pipelines?hydrate=false&name=' \ - -H 'accept: application/json' \ - -H 'Authorization: Bearer ' - ``` +2. **ZenML Server Environment**: + - A FastAPI application managing pipelines and metadata, including the ZenML Dashboard. + - Install dependencies during ZenML deployment, especially for custom integrations. Refer to the [server configuration guide](./configure-the-server-environment.md) for more details. -2. **Get Template ID:** - - Use the pipeline ID to fetch available run templates. - ```shell - curl -X 'GET' \ - '/api/v1/run_templates?hydrate=false&pipeline_id=' \ - -H 'accept: application/json' \ - -H 'Authorization: Bearer ' - ``` +3. **Execution Environments**: + - When running locally, the client, server, and execution environments are the same. 
+ - For remote pipelines, ZenML transfers code to the remote orchestrator by building Docker images (execution environments). + - ZenML configures Docker images starting from a base image containing ZenML and Python, adding pipeline dependencies. Follow the [containerize your pipeline guide](../../../how-to/customize-docker-builds/README.md) for Docker image configuration. -3. **Run the Pipeline:** - - Trigger the pipeline using the selected template ID. You can customize the configuration in the request body. - ```shell - curl -X 'POST' \ - '/api/v1/run_templates//runs' \ - -H 'accept: application/json' \ - -H 'Content-Type: application/json' \ - -H 'Authorization: Bearer ' \ - -d '{ - "steps": {"model_trainer": {"parameters": {"model_type": "rf"}}} - }' - ``` +4. **Image Builder Environment**: + - Execution environments are created locally using the Docker client, requiring installation and permissions. + - ZenML provides image builders, a specialized stack component, to build and push Docker images in a different environment. + - If no image builder is configured, ZenML uses the local image builder for consistency. -A successful response indicates that the pipeline has been re-triggered with the specified configuration. - -For details on obtaining a bearer token for API access, refer to the [API Reference](../../../reference/api-reference.md#using-a-bearer-token-to-access-the-api-programmatically). +For more details on specific components, refer to the respective guides linked throughout the documentation. ================================================== @@ -14074,8 +13860,8 @@ For details on obtaining a bearer token for API access, refer to the [API Refere ZenML integrates with Hugging Face's Accelerate library to facilitate distributed training in machine learning pipelines, allowing the use of multiple GPUs or nodes. -#### Using Accelerate in ZenML Steps -To enable distributed execution for training steps, use the `run_with_accelerate` decorator: +#### Using 🤗 Accelerate in Steps +To enable distributed execution in training steps, use the `run_with_accelerate` decorator: ```python from zenml import step, pipeline @@ -14092,7 +13878,7 @@ def training_pipeline(some_param: int, ...): ``` The decorator accepts arguments similar to the `accelerate launch` CLI command. Key arguments include: -- `num_processes`: Number of processes for training. +- `num_processes`: Number of processes for distributed training. - `cpu`: Force training on CPU. - `multi_gpu`: Enable distributed GPU training. - `mixed_precision`: Set mixed precision mode ('no', 'fp16', 'bf16'). @@ -14102,12 +13888,10 @@ The decorator accepts arguments similar to the `accelerate launch` CLI command. 2. Use keyword arguments for calling steps. 3. Misuse raises a `RuntimeError` with guidance. -For a full example, refer to the [llm-lora-finetuning project](https://github.com/zenml-io/zenml-projects/blob/main/llm-lora-finetuning/README.md). - -#### Container Configuration for Accelerate -To run steps with Accelerate, ensure your environment is set up correctly: +#### Environment Configuration +To run Accelerate, ensure the following Docker settings: -1. **Specify a CUDA-enabled parent image**: +1. **CUDA-enabled Parent Image**: ```python from zenml import pipeline from zenml.config import DockerSettings @@ -14119,7 +13903,7 @@ To run steps with Accelerate, ensure your environment is set up correctly: ... ``` -2. **Add Accelerate as a requirement**: +2. 
**Add Accelerate as a Requirement**: ```python docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", @@ -14132,12 +13916,11 @@ To run steps with Accelerate, ensure your environment is set up correctly: ``` #### Multi-GPU Training -ZenML's Accelerate integration allows training on multiple GPUs, either on a single node or across nodes. Steps to implement include: -- Wrapping the training step with `run_with_accelerate`. -- Configuring Accelerate arguments (e.g., `num_processes`, `multi_gpu`). -- Ensuring training code is compatible with distributed training. +ZenML's Accelerate integration allows training with multiple GPUs on one or more nodes, enhancing performance for large datasets or complex models. Ensure your training step is wrapped with `run_with_accelerate` and configure the necessary arguments. -For assistance, connect with ZenML on [Slack](https://zenml.io/slack). This integration helps scale training processes while maintaining the structure of ZenML pipelines. +For further assistance, connect via [Slack](https://zenml.io/slack). + +This integration helps scale training processes effectively while maintaining ZenML's structured pipeline benefits. ================================================== @@ -14146,10 +13929,10 @@ For assistance, connect with ZenML on [Slack](https://zenml.io/slack). This inte # Summary of GPU Resource Management in ZenML ## Overview -ZenML allows scaling machine learning pipelines to the cloud, leveraging GPU-backed hardware for resource-intensive tasks. This involves specifying resource requirements and ensuring the container environment is properly configured. +ZenML allows scaling machine learning pipelines to the cloud, leveraging GPU-backed hardware to enhance performance. This involves configuring resource settings and ensuring the environment is CUDA-enabled. ## Specifying Resource Requirements -To allocate resources for specific steps in your pipeline, use `ResourceSettings`: +To allocate resources for specific steps, use `ResourceSettings`: ```python from zenml.config import ResourceSettings @@ -14160,7 +13943,7 @@ def training_step(...) -> ...: # train a model ``` -For orchestrators like Skypilot that do not support `ResourceSettings` directly, use orchestrator-specific settings: +For orchestrators like Skypilot that do not support `ResourceSettings`, use specific orchestrator settings: ```python from zenml import step @@ -14173,35 +13956,40 @@ def training_step(...) -> ...: # train a model ``` -Refer to the documentation of each orchestrator for specific resource support. +Refer to each orchestrator's documentation for resource specification support. -### Container Configuration -To utilize GPU capabilities, ensure your container is CUDA-enabled. This requires: +## CUDA-Enabled Container Configuration +To utilize GPU capabilities, ensure your container is CUDA-enabled: 1. **Specify a CUDA-enabled parent image**: - Example for PyTorch: - ```python - from zenml import pipeline - from zenml.config import DockerSettings - docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") +```python +from zenml import pipeline +from zenml.config import DockerSettings - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... - ``` +docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") + +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... +``` 2. 
**Add ZenML as a pip requirement**: - ```python - docker_settings = DockerSettings( - parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", - requirements=["zenml==0.39.1", "torchvision"] - ) - ``` -Ensure the chosen image is compatible with both local and remote environments. +```python +docker_settings = DockerSettings( + parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", + requirements=["zenml==0.39.1", "torchvision"] +) + +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... +``` -### Resetting CUDA Cache +Choose images carefully to avoid compatibility issues between local and cloud environments. + +## Resetting CUDA Cache To avoid GPU cache issues, reset the CUDA cache between steps: ```python @@ -14218,99 +14006,92 @@ def training_step(...): # train a model ``` -### Multi-GPU Training +Use this function judiciously to avoid impacting other processes using the same GPU. + +## Multi-GPU Training ZenML supports training across multiple GPUs on a single node. To implement this: -- Create a script that handles training logic for parallel execution across GPUs. +- Create a script for model training that runs in parallel across GPUs. - Call this script from within the ZenML step. -For assistance, connect with ZenML support via Slack. +For assistance, connect with the ZenML community on Slack. -### Additional Resources -- **AWS**: [Deep Learning Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) -- **GCP**: [Deep Learning VM Images](https://cloud.google.com/deep-learning-vm/docs/images) -- **Azure**: [Prebuilt Docker Images](https://learn.microsoft.com/en-us/azure/machine-learning/concept-prebuilt-docker-images-inference) - -This summary captures the essential technical details for configuring GPU resources in ZenML, ensuring optimal performance for machine learning pipelines. +This summary captures the essential technical details for configuring and utilizing GPU resources in ZenML, ensuring efficient execution of machine learning pipelines. ================================================== === File: docs/book/how-to/contribute-to-zenml/implement-a-custom-integration.md === -### Summary: Creating an External Integration for ZenML +### Summary: Creating an External Integration and Contributing to ZenML -ZenML aims to streamline the MLOps landscape by providing numerous integrations with popular tools. This guide outlines the steps to create a custom integration and contribute it to the ZenML codebase. +ZenML aims to organize the MLOps landscape by providing numerous integrations with popular tools. This guide outlines how to create a custom integration for ZenML to share with the community. #### Step 1: Plan Your Integration -Identify the categories relevant to your integration from the ZenML documentation. Note that one integration can belong to multiple categories, such as cloud integrations and container registries. +Identify the categories your integration belongs to by referring to the [ZenML component categories](../../component-guide/README.md). Note that an integration can span multiple categories (e.g., cloud integrations like AWS, GCP, Azure). #### Step 2: Create Stack Component Flavors -Develop individual stack component flavors corresponding to the selected categories. Test them as custom flavors before packaging. For example, to register a custom orchestrator flavor: +Develop individual stack component flavors for each selected category. Test them as custom flavors before packaging. 
For example, to register a custom orchestrator flavor:

```shell
zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor
```

-Ensure ZenML is initialized at the root of your repository to avoid resolution issues.
-
-#### Step 3: Create an Integration Class
-1. **Clone the ZenML Repository**: Set up your local environment following the [contributing guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md).
-
-2. **Create Integration Directory**: Structure your integration within `src/zenml/integrations/`:
+Ensure ZenML is initialized at the root of your repository to resolve the flavor class correctly. Verify the registration with:

-```
-/src/zenml/integrations/
-    /
-    ├── artifact-stores/
-    ├── flavors/
-    └── __init__.py
+```shell
+zenml orchestrator flavor list
```

-3. **Define Integration Name**: In `zenml/integrations/constants.py`, add:
+Refer to the [extensibility documentation](../../component-guide/README.md) for more details.

-```python
-EXAMPLE_INTEGRATION = ""
-```
+#### Step 3: Create an Integration Class
+1. **Clone the ZenML Repository**: Set up your local environment by following the [contributing guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md).
+2. **Create Integration Directory**: Structure your integration within `src/zenml/integrations/<name_of_integration>/`, including subdirectories for artifact stores and flavors.
+3. **Define Integration Name**: Add your integration name in `zenml/integrations/constants.py`:

-4. **Create Integration Class**: In `src/zenml/integrations//__init__.py`, define your integration class:
+   ```python
+   EXAMPLE_INTEGRATION = "<name_of_integration>"
+   ```

-```python
-from zenml.integrations.constants import 
-from zenml.integrations.integration import Integration
-from zenml.stack import Flavor
+4. **Create Integration Class**: In `src/zenml/integrations/<name_of_integration>/__init__.py`, subclass the `Integration` class (angle-bracketed names are placeholders to replace with your own):

-class ExampleIntegration(Integration):
-    NAME = 
-    REQUIREMENTS = [""]
+   ```python
+   from typing import List, Type
+
+   from zenml.integrations.constants import <EXAMPLE_INTEGRATION>
+   from zenml.integrations.integration import Integration
+   from zenml.stack import Flavor

-    @classmethod
-    def flavors(cls) -> List[Type[Flavor]]:
-        from zenml.integrations. import 
-        return []
-
-ExampleIntegration.check_installation()
-```
+   class ExampleIntegration(Integration):
+       NAME = <EXAMPLE_INTEGRATION>
+       REQUIREMENTS = ["<INTEGRATION_REQUIREMENTS>"]

-Refer to the [MLflow Integration](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/mlflow/__init__.py) for guidance.
+       @classmethod
+       def flavors(cls) -> List[Type[Flavor]]:
+           from zenml.integrations.<name_of_integration>.flavors import <ExampleIntegrationFlavor>
+           return [<ExampleIntegrationFlavor>]

-5. **Import Integration**: Ensure your integration is imported in `src/zenml/integrations/__init__.py`.
+   ExampleIntegration.check_installation()
+   ```
+
+   Check the [MLflow Integration](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/mlflow/__init__.py) for reference.
+
+5. **Import the Integration**: Ensure your integration is imported in `src/zenml/integrations/__init__.py`.

#### Step 4: Create a Pull Request
-Submit a PR to the ZenML repository and await review from core maintainers. Thank you for contributing!
+Submit a [PR](https://github.com/zenml-io/zenml/compare) to ZenML for review by core maintainers. Thank you for contributing!

==================================================

=== File: docs/book/how-to/contribute-to-zenml/README.md ===

-# Contribute to ZenML
+# Contributing to ZenML

-Thank you for considering contributing to ZenML! We welcome contributions such as new features, documentation improvements, integrations, or bug reports.
+Thank you for considering contributing to ZenML!

## How to Contribute
-Refer to the [ZenML contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for best practices and conventions for contributing features, including custom integrations.
+We welcome contributions such as new features, documentation improvements, integrations, or bug reports. For detailed guidelines on contributing, including adding custom integrations, refer to the [ZenML contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md).

-![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc)
+Your contributions are greatly appreciated!

==================================================

@@ -14319,55 +14100,45 @@ Refer to the [ZenML contribution guide](https://github.com/zenml-io/zenml/blob/m

### ZenML Server Upgrade Guide

#### Overview
-Upgrading your ZenML server varies based on deployment method. Always refer to the [best practices for upgrading ZenML](best-practices-upgrading-zenml.md) before proceeding. Upgrade promptly after a new version release to benefit from improvements and fixes.
+Upgrading your ZenML server varies based on deployment method. Always refer to the [best practices for upgrading ZenML](best-practices-upgrading-zenml.md) before proceeding. It's recommended to upgrade promptly after new versions are released for improvements and fixes.

#### Upgrade Methods

-**1. Docker**
-
-To upgrade using Docker:
-- Ensure data is persisted (on persistent storage or external MySQL) and optionally back up before upgrading.
-
-**Steps:**
-1. Identify your container ID:
-   ```bash
-   docker ps
-   ```
-2. Stop and remove the existing container:
+##### Docker
+1. **Check Data Persistence**: Ensure your data is stored on persistent storage or an external MySQL instance. Consider backing up before upgrading.
+2. **Delete Existing Container**:
   ```bash
+   docker ps  # Find your container ID
   docker stop <CONTAINER_ID>
   docker rm <CONTAINER_ID>
   ```
-3. Deploy the new version of the `zenml-server` image:
+3. **Deploy New Version**:
   ```bash
   docker run -it -d -p 8080:8080 --name <CONTAINER_NAME> zenmldocker/zenml-server:<VERSION>
   ```

-**2. Kubernetes with Helm**
-
-To upgrade your ZenML server Helm release:
-- Pull the latest Helm chart from the ZenML GitHub repository:
+##### Kubernetes with Helm
+1. **Pull Latest Helm Chart**:
   ```bash
   git clone https://github.com/zenml-io/zenml.git
   git pull
   cd src/zenml/zen_server/deploy/helm/
   ```
-- Reuse your `custom-values.yaml` from the previous installation. If unavailable, extract it:
+2. **Reuse or Extract Values**:
+   If you have a `custom-values.yaml` from the previous installation, use it. If not, extract values:
   ```bash
   helm -n <NAMESPACE> get values zenml-server > custom-values.yaml
   ```
-- Upgrade the release:
+3. **Upgrade Release**:
   ```bash
   helm -n <NAMESPACE> upgrade zenml-server . -f custom-values.yaml
   ```

-> **Note:** Avoid changing the container image tag in the Helm chart unless necessary, as each Helm chart version is tested with the default image tag.
+> **Note**: Avoid changing the container image tag in the Helm chart unless necessary, as it may lead to compatibility issues.

-#### Important Considerations
-- **Downgrading**: Not supported; may cause unexpected behavior.
-- **Python Client Version**: Should match the server version.

-This guide provides essential steps and precautions for upgrading ZenML servers across different deployment methods.
+#### Important Notes
+- **Downgrading**: Not supported and may cause unexpected behavior.
+- **Python Client Version**: Should match the server version for compatibility. ================================================== @@ -14382,20 +14153,20 @@ This guide provides essential steps and precautions for upgrading ZenML servers - **Automated Backups**: Set up daily automated backups using services like AWS RDS, Google Cloud SQL, or Azure Database for MySQL. ### Upgrade Strategies -- **Staged Upgrade**: Use two ZenML server instances (old and new) for large organizations, migrating services incrementally. -- **Team Coordination**: Schedule upgrades to minimize disruption, especially if multiple teams share a server. -- **Separate ZenML Servers**: Consider dedicated servers for teams needing different upgrade schedules. ZenML Pro supports multi-tenancy for this purpose. +- **Staged Upgrade**: For large organizations, use two ZenML server instances (old and new) to migrate services gradually. +- **Team Coordination**: Coordinate upgrade timing among teams to minimize disruption. +- **Separate ZenML Servers**: Use dedicated servers for different teams to allow flexible upgrade schedules. ZenML Pro supports multi-tenancy for this purpose. ### Minimizing Downtime -- **Upgrade Timing**: Plan upgrades during low-activity periods. -- **Avoid Mid-Pipeline Upgrades**: Be cautious of upgrades that might interrupt long-running pipelines. +- **Upgrade Timing**: Schedule upgrades during low-activity periods. +- **Avoid Mid-Pipeline Upgrades**: Be cautious of upgrades that may interrupt long-running pipelines. ## Upgrading Your Code ### Testing and Compatibility -- **Local Testing**: Test locally after upgrading (`pip install zenml --upgrade`) and run old pipelines to check compatibility. -- **End-to-End Testing**: Develop simple tests to ensure compatibility with your pipeline code. Refer to ZenML's [test suite](https://github.com/zenml-io/zenml/tree/main/tests) for examples. -- **Artifact Compatibility**: Be cautious with pickle-based materializers; use version-agnostic methods for critical artifacts. Load older artifacts with: +- **Local Testing**: Test your code locally after upgrading (`pip install zenml --upgrade`) to check for compatibility. +- **End-to-End Testing**: Develop simple tests to ensure compatibility with your pipeline code. Utilize ZenML's [extensive test suite](https://github.com/zenml-io/zenml/tree/main/tests) as a reference. +- **Artifact Compatibility**: Be cautious with pickle-based materializers. Test loading older artifacts with the new version: ```python from zenml.client import Client @@ -14406,163 +14177,174 @@ loaded_artifact = artifact.load() ### Dependency Management - **Python Version**: Ensure your Python version is compatible with the new ZenML version. Refer to the [installation guide](../../getting-started/installation.md). -- **External Dependencies**: Check for compatibility of external dependencies with the new ZenML version, as older versions may no longer be supported. Consult the [release notes](https://github.com/zenml-io/zenml/releases). +- **External Dependencies**: Check for compatibility of external dependencies with the new ZenML version, as older versions may no longer be supported. Review the [release notes](https://github.com/zenml-io/zenml/releases). ### Handling API Changes - **Changelog Review**: Review the [changelog](https://github.com/zenml-io/zenml/releases) for breaking changes and new syntax. - **Migration Scripts**: Use available [migration scripts](migration-guide/migration-guide.md) for database schema changes. 
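To make the end-to-end testing advice above concrete, a minimal post-upgrade smoke test might look like the sketch below. The pipeline and step names are illustrative assumptions, not part of the original guide; adapt them to your own pipelines:

```python
# Post-upgrade smoke test: verifies that step wiring and artifact passing
# still work after `pip install zenml --upgrade`. Names are illustrative.
from zenml import pipeline, step


@step
def produce_value() -> int:
    return 42


@step
def check_value(value: int) -> None:
    # Fails loudly if artifact serialization or step execution broke.
    assert value == 42


@pipeline
def upgrade_smoke_test():
    check_value(produce_value())


if __name__ == "__main__":
    upgrade_smoke_test()  # should complete without errors on the new version
```

Running such a test against a staging stack before touching production helps catch breaking changes early.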
-By following these best practices, you can minimize risks and ensure a smoother upgrade process for your ZenML server. Adapt these guidelines to fit your specific environment and infrastructure. +By following these best practices, you can minimize risks and ensure a smoother upgrade process for your ZenML server. Adapt these guidelines to your specific environment and infrastructure. ================================================== === File: docs/book/how-to/manage-zenml-server/using-zenml-server-in-prod.md === -### Best Practices for Using ZenML Server in Production +# Best Practices for Using ZenML Server in Production -This guide outlines best practices for setting up a ZenML server in production environments, focusing on performance, scalability, and reliability. +This guide outlines best practices for deploying ZenML server in production environments, focusing on autoscaling, performance optimization, database scaling, ingress setup, monitoring, and backup strategies. -#### Autoscaling Replicas -To handle larger and longer-running pipelines, enable autoscaling based on your deployment environment: +## Autoscaling Replicas +To handle larger and longer-running pipelines, set up autoscaling based on your deployment environment: -- **Kubernetes with Helm**: - ```yaml - autoscaling: - enabled: true - minReplicas: 1 - maxReplicas: 10 - targetCPUUtilizationPercentage: 80 - ``` +### Kubernetes with Helm +Enable autoscaling in the Helm chart: +```yaml +autoscaling: + enabled: true + minReplicas: 1 + maxReplicas: 10 + targetCPUUtilizationPercentage: 80 +``` -- **ECS (AWS)**: - Use the ECS console to enable autoscaling and set task limits in the "Service auto scaling - optional" section. +### ECS (AWS) +1. Access the ECS console. +2. Select your ZenML service. +3. Click "Update Service" and enable autoscaling in the "Service auto scaling - optional" section. -- **Cloud Run (GCP)**: - Set minimum instances to 1 in the "Revision auto-scaling" section. +### Cloud Run (GCP) +1. Go to the Cloud Run console. +2. Select your ZenML service and click "Edit & Deploy new Revision." +3. Set minimum and maximum instances in the "Revision auto-scaling" section. -- **Docker Compose**: - Scale with: - ```bash - docker compose up --scale zenml-server=N - ``` +### Docker Compose +Scale your service using: +```bash +docker compose up --scale zenml-server=N +``` -#### High Connection Pool Values -Increase the thread pool size for better performance: +## High Connection Pool Values +Increase server performance by adjusting thread pool size: ```yaml zenml: threadPoolSize: 100 ``` -Adjust `zenml.database.poolSize` and `zenml.database.maxOverflow` accordingly. +Set `ZENML_SERVER_THREAD_POOL_SIZE` for other deployments, and adjust `zenml.database.poolSize` and `zenml.database.maxOverflow` accordingly. -#### Scaling the Backing Database +## Scaling the Backing Database Monitor and scale your database based on: - **CPU Utilization**: Scale if consistently above 50%. -- **Freeable Memory**: Scale if it drops below 100-200 MB. +- **Freeable Memory**: Scale if below 100-200 MB. -#### Setting Up Ingress/Load Balancer +## Setting Up Ingress/Load Balancer Securely expose your ZenML server: -- **Kubernetes with Helm**: - ```yaml - zenml: - ingress: - enabled: true - className: "nginx" - ``` +### Kubernetes with Helm +Enable ingress: +```yaml +zenml: + ingress: + enabled: true + className: "nginx" +``` -- **ECS**: Use Application Load Balancers as per AWS documentation. 
+### ECS +Use Application Load Balancers as per [AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html). -- **Cloud Run**: Use Cloud Load Balancing as per GCP documentation. +### Cloud Run +Utilize Cloud Load Balancing following [GCP documentation](https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless). -- **Docker Compose**: Set up an NGINX server as a reverse proxy. +### Docker Compose +Set up an NGINX reverse proxy for routing. -#### Monitoring -Implement monitoring tools based on your deployment: +## Monitoring +Monitor your ZenML server using appropriate tools: -- **Kubernetes**: Use Prometheus and Grafana. Example query for CPU utilization: - ``` - sum by(namespace) (rate(container_cpu_usage_seconds_total{namespace=~"zenml.*"}[5m])) - ``` +### Kubernetes with Helm +Use Prometheus and Grafana. Example query for CPU utilization: +``` +sum by(namespace) (rate(container_cpu_usage_seconds_total{namespace=~"zenml.*"}[5m])) +``` -- **ECS**: Utilize CloudWatch for metrics. +### ECS +Utilize [CloudWatch integration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch-metrics.html) for metrics. -- **Cloud Run**: Use Cloud Monitoring for metrics. +### Cloud Run +Use [Cloud Monitoring integration](https://cloud.google.com/run/docs/monitoring) for metrics. -#### Backups -Establish a backup strategy to protect critical data: +## Backups +Implement a backup strategy to protect critical data: - Automated backups with a retention period (e.g., 30 days). - Periodic exports to external storage (e.g., S3, GCS). - Manual backups before server upgrades. -These best practices will help ensure a robust and efficient ZenML server deployment in production environments. - ================================================== === File: docs/book/how-to/manage-zenml-server/README.md === # Manage Your ZenML Server -This section provides best practices for upgrading your ZenML server, tips for production use, and troubleshooting guidance. It includes recommended upgrade steps and migration guides for transitioning between specific versions. +This section provides best practices for upgrading your ZenML server, tips for production use, and troubleshooting guidance. It includes recommended upgrade steps and migration guides for specific version transitions. ### Key Points: -- **Upgrading**: Follow the recommended steps for a seamless upgrade. -- **Production Use**: Utilize best practices to ensure optimal performance in production environments. -- **Troubleshooting**: Access troubleshooting tips to resolve common issues. -- **Migration Guides**: Detailed instructions for moving between ZenML versions. +- **Upgrading ZenML Server**: Follow the recommended steps for a smooth upgrade process. +- **Production Use**: Tips for effectively utilizing ZenML in a production environment. +- **Troubleshooting**: Guidance for resolving common issues. +- **Migration Guides**: Instructions for moving between certain ZenML versions. -For visual reference, see the ZenML Scarf image provided. +![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/manage-zenml-server/troubleshoot-your-deployed-server.md === -# ZenML Deployment Troubleshooting Guide +### Troubleshooting Tips for ZenML Deployment + +This document provides solutions for common issues encountered during ZenML deployment. 
+
+#### Viewing Logs

-## Viewing Logs
-### Kubernetes
-To view logs for the ZenML server in Kubernetes:
+**Kubernetes:**
1. Check running pods:
   ```bash
   kubectl -n <NAMESPACE> get pods
   ```
-2. If pods aren't running, get logs for all pods:
+2. If pods are not running, view logs for all pods:
   ```bash
   kubectl -n <NAMESPACE> logs -l app.kubernetes.io/name=zenml
   ```
-3. For specific container logs (use `zenml-db-init` for `Init` state errors):
+3. For specific container logs (either `zenml-db-init` or `zenml`):
   ```bash
   kubectl -n <NAMESPACE> logs -l app.kubernetes.io/name=zenml -c <CONTAINER_NAME>
   ```
Use `--tail` to limit lines or `--follow` for real-time logs.

-### Docker
-To view logs for the ZenML server in Docker:
+**Docker:**
- For `zenml login --local --docker`:
-  ```shell
-  zenml logs -f
-  ```
+  ```shell
+  zenml logs -f
+  ```
- For `docker run`:
-  ```shell
-  docker logs zenml -f
-  ```
+  ```shell
+  docker logs zenml -f
+  ```
- For `docker compose`:
-  ```shell
-  docker compose -p zenml logs -f
-  ```
+  ```shell
+  docker compose -p zenml logs -f
+  ```
+
+#### Fixing Database Connection Problems

-## Fixing Database Connection Problems
Common MySQL connection issues:
- **Access Denied**: Check username/password.
-- **Can't Connect**: Verify the host.
+- **Can't Connect**: Verify host settings. Test connection:
+  ```bash
+  mysql -h <HOST> -u <USERNAME> -p
+  ```
+  For Kubernetes, use `kubectl port-forward` to connect locally.

-Test connection with:
-```bash
-mysql -h  -u  -p
-```
-For Kubernetes, use `kubectl port-forward` to connect locally.
+#### Fixing Database Initialization Problems

-## Fixing Database Initialization Problems
-If encountering `Revision not found` after a version downgrade:
+If encountering `Revision not found` after migrating versions:
1. Log in to MySQL:
   ```bash
   mysql -h <HOST> -u <USERNAME> -p
   ```
@@ -14581,40 +14363,40 @@ If encountering `Revision not found` after a version downgrade:

==================================================

=== File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-in-with-your-user-interactive.md ===

-# ZenML User Authentication Overview
+### ZenML User Authentication Overview

-You can authenticate clients with the ZenML Server using the ZenML CLI through the following command:
+You can authenticate clients with the ZenML Server using the ZenML CLI via the following command:

```bash
zenml login https://...
```

-This initiates a browser-based validation process. You can choose to trust your device, which issues a 30-day token, or not, which issues a 24-hour token. To view all authorized devices, use:
+This command initiates a browser-based validation process. You can choose to trust the device, which issues a 30-day token, or not, which issues a 24-hour token. To view authorized devices, use:

```bash
zenml authorized-device list
```

-To inspect a specific device, use:
+For detailed information on a specific device, run:

```bash
zenml authorized-device describe <DEVICE_ID>
```

-For added security, invalidate a token with:
+To enhance security, invalidate a token with:

```bash
zenml authorized-device lock <DEVICE_ID>
```

### Summary of Steps:
-1. Run `zenml login ` to connect to the ZenML server.
+1. Execute `zenml login <SERVER_URL>` to connect to the ZenML server.
2. Decide whether to trust the device.
-3. List permitted devices with `zenml devices list`.
-4. Lock a device with `zenml device lock ...`.
+3. List authorized devices with `zenml authorized-device list`.
+4. Lock a device with `zenml authorized-device lock <DEVICE_ID>`.

### Important Notice
-Using the ZenML CLI ensures secure interaction with your ZenML tenants. Regularly manage device trust levels to maintain security.
If a device's trust needs revocation, lock it immediately. Each token is a potential access point to your data and infrastructure.
+Using the ZenML CLI ensures secure interaction with ZenML tenants. Regularly manage device trust levels and revoke access by locking devices when necessary, as each token can potentially access sensitive data and infrastructure.

==================================================

@@ -14622,49 +14404,50 @@ Using the ZenML CLI ensures secure interaction with your ZenML tenants. Regularl

# ZenML Service Account Authentication

-To authenticate to a ZenML server in non-interactive environments (e.g., CI/CD, serverless functions), create a service account and use its API key.
+To authenticate to a ZenML server in non-interactive environments (e.g., CI/CD, serverless functions), create a service account and use an API key.

-## Creating a Service Account
+### Creating a Service Account
Create a service account and generate an API key:
```bash
zenml service-account create <SERVICE_ACCOUNT_NAME>
```
-The API key will be displayed in the output and cannot be retrieved later.
+The API key is displayed in the output and cannot be retrieved later.

-## Connecting with the API Key
+### Connecting with API Key
You can connect your ZenML client using one of the following methods:

-1. **CLI Method**:
+1. **Using CLI**:
   ```bash
   zenml login https://... --api-key
   ```

-2. **Environment Variables** (suitable for automated environments):
+2. **Setting Environment Variables** (recommended for automated environments):
   ```bash
   export ZENML_STORE_URL=https://...
   export ZENML_STORE_API_KEY=<API_KEY>
   ```
   After setting these variables, you can interact with the server without running `zenml login`.

-## Managing Service Accounts and API Keys
-- List service accounts:
+### Managing Service Accounts and API Keys
+- **List Service Accounts**:
  ```bash
  zenml service-account list
  ```
-
-- List API keys for a specific service account:
+- **List API Keys**:
  ```bash
  zenml service-account api-key <SERVICE_ACCOUNT_NAME> list
  ```
-
-- Describe a service account or API key:
+- **Describe Service Account**:
  ```bash
  zenml service-account describe <SERVICE_ACCOUNT_NAME>
+  ```
+- **Describe API Key**:
+  ```bash
  zenml service-account api-key <SERVICE_ACCOUNT_NAME> describe <API_KEY_NAME>
  ```

-## API Key Rotation
-API keys do not expire, but it is recommended to rotate them regularly:
+### API Key Rotation
+API keys do not expire, but it's recommended to rotate them regularly:
```bash
zenml service-account api-key <SERVICE_ACCOUNT_NAME> rotate <API_KEY_NAME>
```
@@ -14673,15 +14456,15 @@ To retain the old API key for a specified duration (e.g., 60 minutes):
zenml service-account api-key <SERVICE_ACCOUNT_NAME> rotate <API_KEY_NAME> --retain 60
```

-## Deactivating Service Accounts or API Keys
-To deactivate a service account or an API key:
+### Deactivating Accounts and Keys
+To deactivate a service account or API key:
```bash
zenml service-account update <SERVICE_ACCOUNT_NAME> --active false
zenml service-account api-key <SERVICE_ACCOUNT_NAME> update <API_KEY_NAME> --active false
```

-Deactivation takes immediate effect.
+Deactivation takes immediate effect on all workloads.

-## Summary of Steps
+### Summary of Steps
1. Create a service account: `zenml service-account create`.
2. Connect using API key: `zenml login --api-key` or set environment variables.
3. List service accounts: `zenml service-account list`.
@@ -14689,8 +14472,8 @@ Deactivation takes immediate effect.
5. Rotate API keys: `zenml service-account api-key rotate`.
6. Deactivate accounts/keys: `zenml service-account update` or `zenml service-account api-key update`.

-### Important Notice
-API keys are sensitive and should be rotated regularly.
Deactivate or delete unused service accounts and API keys to maintain security. +### Security Notice +Regularly rotate API keys and deactivate/delete unused service accounts and keys to protect your data and infrastructure. ================================================== @@ -14698,7 +14481,9 @@ API keys are sensitive and should be rotated regularly. Deactivate or delete unu ### Connecting to ZenML -Once ZenML is deployed, there are multiple methods to connect to the server. For detailed deployment instructions, refer to the [production guide](../../../user-guide/production-guide/deploying-zenml.md). +Once ZenML is deployed, there are multiple methods to connect to the server. + +For detailed deployment instructions, refer to the [production guide](../../../user-guide/production-guide/deploying-zenml.md). ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) @@ -14708,23 +14493,25 @@ Once ZenML is deployed, there are multiple methods to connect to the server. For ### Migration Guide: ZenML 0.13.2 to 0.20.0 -**Last updated: 2023-07-24** +**Last Updated: 2023-07-24** + +ZenML 0.20.0 introduces significant architectural changes that may break compatibility with previous versions. This guide outlines the necessary steps to migrate existing ZenML stacks and pipelines. -ZenML 0.20.0 introduces significant architectural changes that may not be backwards compatible. This guide provides instructions for migrating existing ZenML stacks and pipelines. +#### Key Changes -#### Key Changes: -- **Metadata Store**: ZenML now manages its own Metadata Store, eliminating the need for separate components. If using remote Metadata Stores, replace them with a ZenML server deployment. -- **ZenML Dashboard**: A new dashboard is available for all deployments. -- **Profiles Removed**: ZenML Profiles are replaced by Projects. Existing Profiles must be manually migrated. -- **Stack Component Configuration**: Configuration is now decoupled from implementation, requiring updates for custom components. -- **Collaborative Features**: ZenML server allows sharing of stacks and components among users. +1. **Metadata Store**: ZenML now manages its own Metadata Store. If using remote Metadata Stores, transition to a ZenML server deployment. +2. **ZenML Dashboard**: A new dashboard is available for all ZenML deployments. +3. **Profiles Removal**: ZenML Profiles have been replaced by Projects. Existing Profiles must be manually migrated. +4. **Decoupled Configuration**: Stack Component configuration is now separate from implementation, requiring updates for custom components. +5. **Collaborative Features**: The ZenML server allows sharing of Stacks and Components among users. -#### Migration Steps: -1. **Backup Metadata**: Before upgrading, back up existing metadata stores. +#### Migration Steps + +1. **Backup Metadata**: Before upgrading, back up all metadata stores. 2. **Upgrade ZenML**: Use `pip install zenml==0.20.0`. -3. **Connect to ZenML Server**: If using a ZenML server, run `zenml connect`. +3. **Connect to ZenML Server**: If using a server, run `zenml connect`. 4. 
**Migrate Pipeline Runs**: - - For SQLite: + - For local SQLite: ```bash zenml pipeline runs migrate PATH/TO/LOCAL/STORE/metadata.db ``` @@ -14733,36 +14520,40 @@ ZenML 0.20.0 introduces significant architectural changes that may not be backwa zenml pipeline runs migrate DATABASE_NAME --database_type=mysql --mysql_host=URL/TO/MYSQL --mysql_username=MYSQL_USERNAME --mysql_password=MYSQL_PASSWORD ``` -#### New Commands: +#### New Commands + - **Deploy Server**: `zenml deploy --aws` - **Start Local Server**: `zenml up` - **Check Server Status**: `zenml status` -#### Dashboard Access: -Launch the dashboard with: +#### Dashboard Access + +Launch the ZenML Dashboard with: ```bash zenml up ``` -Access it at `http://localhost:8237`. +Access it at `http://127.0.0.1:8237`. + +#### Profile Migration -#### Profile Migration: +To migrate Profiles: 1. Update ZenML to 0.20.0. -2. Connect to a ZenML server. +2. Connect to the ZenML server. 3. Use: ```bash - zenml profile list zenml profile migrate /path/to/profile ``` -#### Configuration Changes: -- **Renamed Classes**: +#### Configuration Changes + +- **Class Renaming**: - `Repository` → `Client` - `BaseStepConfig` → `BaseParameters` -- **New Configuration Approach**: Use `BaseSettings` for pipeline configurations. -- **Deprecation of `enable_xxx` decorators**: Replace with direct settings in the step decorator. +- **New Configuration Paradigm**: Use `BaseSettings` for pipeline configurations, removing previous decorators like `@enable_xxx`. -#### Example Migration: -For a step using MLflow: +#### Example Migration + +For a step configuration: ```python @step( experiment_tracker="mlflow_stack_comp_name", @@ -14775,69 +14566,74 @@ For a step using MLflow: ) ``` -#### Future Changes: +#### Future Changes + - Potential removal of the secrets manager from the stack. - Deprecation of `StepContext`. -#### Reporting Bugs: -For issues or feature requests, contact the ZenML community via [Slack](https://zenml.io/slack) or submit a [GitHub Issue](https://github.com/zenml-io/zenml/issues/new/choose). +#### Reporting Issues + +For bugs or feature requests, engage with the ZenML community on [Slack](https://zenml.io/slack) or submit a [GitHub Issue](https://github.com/zenml-io/zenml/issues/new/choose). + +This guide provides a concise overview of the migration process to ZenML 0.20.0, ensuring critical information is retained while removing redundancy. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-guide.md === -### ZenML Migration Guide Summary - -**Purpose:** This guide outlines the process for migrating ZenML code to the latest version, particularly when breaking changes occur. +### Migration Guide for ZenML -**Versioning Overview:** -- **No Breaking Changes:** Upgrades like `0.40.2` to `0.40.3` do not require migration. -- **Minor Breaking Changes:** Upgrades from `0.40.3` to `0.41.0` necessitate consideration of changes. -- **Major Breaking Changes:** Upgrades such as `0.39.1` to `0.40.0` involve significant shifts in code usage. +Migrations are required for ZenML releases with breaking changes, specifically for minor version increments (e.g., `0.X` to `0.Y`) and major version increments (e.g., `0.1` to `0.2`). -**Major Migration Guides:** -Follow these sequential guides for major version upgrades: -1. [0.13.2 → 0.20.0](migration-zero-twenty.md) -2. [0.23.0 → 0.30.0](migration-zero-thirty.md) -3. [0.39.1 → 0.41.0](migration-zero-forty.md) -4. 
[0.58.2 → 0.60.0](migration-zero-sixty.md) +#### Release Type Examples +- **No Breaking Changes:** `0.40.2` to `0.40.3` (no migration needed) +- **Minor Breaking Changes:** `0.40.3` to `0.41.0` (migration required) +- **Major Breaking Changes:** `0.39.1` to `0.40.0` (significant changes in code usage) -**Release Notes:** For minor breaking changes, refer to the official [ZenML Release Notes](https://github.com/zenml-io/zenml/releases) for specific details. +#### Major Migration Guides +Follow these guides sequentially if multiple migrations are needed: +- [0.13.2 → 0.20.0](migration-zero-twenty.md) +- [0.23.0 → 0.30.0](migration-zero-thirty.md) +- [0.39.1 → 0.41.0](migration-zero-forty.md) +- [0.58.2 → 0.60.0](migration-zero-sixty.md) -This summary provides essential information on how to approach ZenML version migrations, emphasizing the importance of following the guides in order for major changes. +#### Release Notes +For minor breaking changes, refer to the official [ZenML Release Notes](https://github.com/zenml-io/zenml/releases) for details on changes introduced. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md === -### Migration Guide: ZenML 0.58.2 to 0.60.0 (Pydantic 2) +### Migration Guide: ZenML 0.58.2 to 0.60.0 (Pydantic 2 Edition) #### Overview -ZenML has upgraded to Pydantic v2, introducing critical updates that may affect user experience due to stricter validation. Users may encounter new validation errors. For issues, contact us on [GitHub](https://github.com/zenml-io/zenml) or [Slack](https://zenml.io/slack-invite). +ZenML has upgraded to Pydantic v2, introducing critical updates that may affect user experience due to stricter validation and dependency changes. Users may encounter new validation errors that were previously unnoticed. #### Key Dependency Changes - **SQLModel**: Upgraded from `0.0.8` to `0.0.18` for compatibility with Pydantic v2. -- **SQLAlchemy**: Upgraded from v1 to v2. Users of SQLAlchemy should refer to [their migration guide](https://docs.sqlalchemy.org/en/20/changelog/migration_20.html). +- **SQLAlchemy**: Upgraded from v1 to v2. Users of SQLAlchemy should consult the [migration guide](https://docs.sqlalchemy.org/en/20/changelog/migration_20.html). #### Pydantic v2 Features -- Improved performance with Rust-based core logic. -- Enhanced model design, configuration, validation, and serialization features. Refer to [Pydantic's migration guide](https://docs.pydantic.dev/2.7/migration/) for details. +- Enhanced performance using Rust. +- New features in model design, configuration, validation, and serialization. +- For detailed changes, refer to the [Pydantic migration guide](https://docs.pydantic.dev/2.7/migration/). #### Integration Changes -- **Airflow**: Removed dependencies due to incompatibility with SQLAlchemy v1. Use ZenML for pipeline creation and a separate environment for Airflow. -- **AWS**: Updated `sagemaker` to version `2.172.0` to support `protobuf` 4. -- **Evidently**: Updated integration to support versions `0.4.16` to `0.4.22`. -- **Feast**: Removed incompatible `redis` dependency. -- **GCP & Kubeflow**: Upgraded `kfp` dependency to v2, removing Pydantic dependency. -- **Great Expectations**: Updated to `great-expectations>=0.17.15,<1.0`. -- **MLflow**: Compatible with both Pydantic versions; manual addition of Pydantic requirement to avoid downgrades. -- **Label Studio**: Updated to support `pydantic` v2. 
-- **Skypilot**: `skypilot[azure]` integration deactivated due to incompatibility. -- **TensorFlow**: Requires `tensorflow>=2.12.0` to resolve dependency issues with `protobuf`. -- **Tekton**: Updated to use `kfp` v2. +- **Airflow**: Removed dependencies due to Airflow's use of SQLAlchemy v1, allowing ZenML to create Airflow pipelines in a separate environment. Updated docs available [here](../../../component-guide/orchestrators/airflow.md). +- **AWS**: Upgraded `sagemaker` to version `2.172.0` to support `protobuf` 4. +- **Evidently**: Updated to versions `0.4.16` to `0.4.22` for Pydantic v2 compatibility. +- **Feast**: Removed incompatible `redis` dependency, ensuring functionality. +- **GCP**: Upgraded `kfp` dependency to v2, which has no Pydantic dependencies. Functional changes in the vertex step operator may occur. Migration guide [here](https://www.kubeflow.org/docs/components/pipelines/v2/migration/). +- **Great Expectations**: Updated dependency to `great-expectations>=0.17.15,<1.0` for Pydantic v2 support. +- **Kubeflow**: Similar to GCP, upgraded `kfp` to v2. Migration guide [here](https://www.kubeflow.org/docs/components/pipelines/v2/migration/). +- **MLflow**: Compatible with both Pydantic versions, but may downgrade to v1 due to installation order. Deprecation warnings may appear. +- **Label Studio**: Updated to support Pydantic v2 in its 1.0 release. +- **Skypilot**: Compatibility issues with `azurecli` prevent installation of `skypilot[azure]`. Users should remain on the previous ZenML version until resolved. +- **TensorFlow**: Requires `tensorflow >=2.12.0` for compatibility with `protobuf` 4. Issues may arise on Python 3.8; higher Python versions are recommended. +- **Tekton**: Updated to use `kfp` v2, with documentation adjustments. #### Important Note -Upgrading to ZenML 0.60.0 may lead to dependency issues, especially with integrations not supporting Pydantic v2. It is recommended to set up a fresh Python environment for a smoother transition. +Upgrading to ZenML 0.60.0 may lead to dependency issues, especially with integrations not supporting Pydantic v2. It is advisable to set up a fresh Python environment for a smoother transition. ================================================== @@ -14845,23 +14641,19 @@ Upgrading to ZenML 0.60.0 may lead to dependency issues, especially with integra ### Migration Guide: ZenML 0.20.0-0.23.0 to 0.30.0-0.39.1 -**Important Note:** Migrating to ZenML `0.30.0` results in non-reversible database changes; downgrading to `<=0.23.0` is not possible afterward. If using an older version, first consult the [0.20.0 Migration Guide](migration-zero-twenty.md) to avoid migration issues. +**Important Note:** Migrating to `0.30.0` involves non-reversible database changes; downgrading to `<=0.23.0` is not possible afterward. If on an older version, first consult the [0.20.0 Migration Guide](migration-zero-twenty.md) to avoid migration issues. **Key Changes:** -- ZenML `0.30.0` removes the `ml-pipelines-sdk` dependency. +- The `ml-pipelines-sdk` dependency has been removed. - Pipeline runs and artifacts are now stored natively in the ZenML database. **Migration Steps:** -1. Install ZenML `0.30.0`: +1. Install ZenML 0.30.0: ```bash pip install zenml==0.30.0 - ``` -2. Verify installation: - ```bash zenml version # Should return 0.30.0 ``` - -**Database Migration:** This occurs automatically upon executing any `zenml ...` CLI command after installation. +2. 
The database migration will occur automatically upon executing any `zenml ...` CLI command after installation.

==================================================

@@ -14871,10 +14663,9 @@ Upgrading to ZenML 0.60.0 may lead to dependency issues, especially with integra

ZenML versions 0.40.0 to 0.41.0 introduced a new syntax for defining steps and pipelines. The old syntax is deprecated and will be removed in future versions.

-#### Overview
-
-**Old Syntax Example:**
+#### Old Syntax Example
```python
+from typing import Optional
+from zenml.config.schedule import Schedule
from zenml.steps import BaseParameters, Output, StepContext, step
from zenml.pipelines import pipeline

@@ -14894,10 +14685,10 @@ def my_pipeline(my_step):
    step_instance = my_step(params=MyStepParameters(param_1=17))
    pipeline_instance = my_pipeline(my_step=step_instance)

-pipeline_instance.run()
+pipeline_instance.run(schedule=Schedule(...))
```

-**New Syntax Example:**
+#### New Syntax Example
```python
from typing import Annotated, Optional, Tuple
from zenml import get_step_context, pipeline, step

@@ -14912,52 +14703,48 @@ def my_step(param_1: int, param_2: Optional[float] = None) -> Tuple[Annotated[in
def my_pipeline():
    my_step(param_1=17)

-my_pipeline.with_options(enable_cache=False)()
+my_pipeline = my_pipeline.with_options(enable_cache=False)
+my_pipeline()
```

-#### Key Changes
-
-1. **Defining Steps:**
-   - Old: Use `BaseParameters` for parameters.
-   - New: Directly define parameters in the step function or use `pydantic.BaseModel`.
-
-2. **Calling Steps:**
-   - Old: Use `my_step.entrypoint()`.
-   - New: Call `my_step()` directly.
+### Key Changes in Syntax

-3. **Defining Pipelines:**
-   - Old: Steps are arguments of the pipeline function.
-   - New: Call steps directly within the pipeline function.
+1. **Step Definition**:
+   - Old: Parameters defined using `BaseParameters`.
+   - New: Parameters are directly defined as function arguments or can use `pydantic.BaseModel`.

-4. **Configuring Pipelines:**
-   - Old: Use `pipeline_instance.configure(...)`.
-   - New: Use `with_options(...)` method.
+2. **Pipeline Definition**:
+   - Old: Steps passed as arguments to the pipeline function.
+   - New: Steps called directly within the pipeline function.

-5. **Running Pipelines:**
-   - Old: Create an instance and call `run()`.
-   - New: Call the pipeline function directly.
+3. **Running Steps**:
+   - Old: Used `step.entrypoint()`.
+   - New: Call the step directly.

-6. **Scheduling Pipelines:**
-   - Old: Use `pipeline_instance.run(schedule=...)`.
-   - New: Set the schedule with `with_options(...)`.
+4. **Pipeline Configuration**:
+   - Old: Configured using `pipeline_instance.configure(...)`.
+   - New: Use `with_options(...)`.

-7. **Fetching Pipeline Runs:**
-   - Old: Use `get_runs()`.
-   - New: Access `last_run` directly from the pipeline instance.
+5. **Fetching Outputs**:
+   - Old: Accessed via `last_run.get_step(...)`.
+   - New: Accessed via `last_run.steps[...]`.

-8. **Controlling Step Execution Order:**
-   - Old: Use `step.after(...)`.
-   - New: Pass `after` argument when calling a step.
+6. **Step Execution Order**:
+   - Old: Used `step.after(...)`.
+   - New: Use `after` argument in the step call.

-9. **Defining Steps with Multiple Outputs:**
-   - Old: Use `Output` class.
+7. **Multiple Outputs**:
+   - Old: Used `Output` class.
    - New: Use `Tuple` with optional `Annotated` for custom names.

-10. **Accessing Run Information:**
-   - Old: Pass `StepContext` as an argument.
-   - New: Use `get_step_context()` to access run information.
+8. **Accessing Run Information**:
+   - Old: `StepContext` passed as an argument.
+   - New: Use `get_step_context()` to access context.
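A short sketch ties several of these changes together. It is illustrative only: the pipeline, step, and output names are placeholders, and the exact response shape can vary slightly across ZenML versions:

```python
from zenml import get_step_context, step
from zenml.client import Client

@step
def report_step() -> None:
    # New syntax: read run information inside a step via get_step_context().
    ctx = get_step_context()
    print(ctx.pipeline.name, ctx.pipeline_run.name)

# New syntax: step runs live in a dict on the run, replacing get_step(...).
run = Client().get_pipeline("my_pipeline").last_run
step_run = run.steps["my_step"]
print(step_run.outputs)  # named output artifacts for that step
```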
-This guide summarizes the key changes and provides code examples for migrating from ZenML 0.39.1 to 0.41.0. For more details on specific topics, refer to the relevant sections in the documentation.
+### Important Notes
+- The new syntax is more flexible and concise.
+- Existing pipelines and steps using the old syntax will continue to work but should be updated to avoid future issues.
+- For further details on parameterizing steps, scheduling pipelines, and fetching metadata, refer to the respective ZenML documentation pages.

==================================================

### Configuring ZenML's Default Behavior

-This guide outlines methods to customize ZenML's behavior. Key aspects include:
-
-- **Configuration Options**: Users can adjust ZenML's settings to suit specific needs.
-- **Customization**: Various parameters can be modified to enhance functionality.
+This guide outlines methods to configure ZenML's behavior in various situations.

-For a visual reference, see the accompanying image.
+**Key Points:**
+- Users can adapt ZenML's settings to suit specific needs.
+- Configuration options allow for customization of default behaviors.

-This documentation serves as a foundational resource for adapting ZenML to user requirements.
+For further details, refer to the full documentation.

==================================================

=== File: docs/book/how-to/popular-integrations/skypilot.md ===

-# Summary of Skypilot with ZenML Documentation
+### Summary of Skypilot with ZenML Documentation

-## Overview
-The ZenML SkyPilot VM Orchestrator enables provisioning and management of VMs across various cloud providers (AWS, GCP, Azure, Lambda Labs) for ML pipelines, offering cost savings and high GPU availability.
+**Overview**:
+The ZenML SkyPilot VM Orchestrator enables provisioning and management of VMs across cloud providers (AWS, GCP, Azure, Lambda Labs) for ML pipelines, offering cost efficiency and high GPU availability.

-## Prerequisites
-To use the SkyPilot VM Orchestrator, ensure you have:
-- ZenML SkyPilot integration installed for your cloud provider (`zenml integration install skypilot_<provider>`)
-- Docker running
-- A remote artifact store and container registry in your ZenML stack
-- A remote ZenML deployment
-- Permissions to provision VMs on your cloud provider
-- A service connector for authentication (not required for Lambda Labs)
+**Prerequisites**:
+- Install the ZenML SkyPilot integration for your cloud provider:
+  `zenml integration install skypilot_<provider>`
+- Docker must be installed and running.
+- A remote artifact store and container registry in your ZenML stack.
+- A remote ZenML deployment.
+- Permissions to provision VMs on your cloud provider.
+- A service connector configured for authentication (not required for Lambda Labs).

-## Configuration Steps
+**Configuration Steps**:

-### For AWS, GCP, Azure:
-1. Install SkyPilot integration and provider-specific connectors.
-2. Register a service connector with necessary permissions.
+*For AWS, GCP, Azure*:
+1. Install the SkyPilot integration and provider-specific connectors.
+2. Register a service connector with the necessary credentials.
3. Register the orchestrator and connect it to the service connector.
4. Register and activate a stack with the orchestrator.
@@ -15007,10 +14793,10 @@
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
```

-### For Lambda Labs:
+*For Lambda Labs*:
1. Install the SkyPilot Lambda integration.
2. Register a secret with your Lambda Labs API key.
-3. Register the orchestrator using the API key secret.
+3. Register the orchestrator with the API key secret.
4. Register and activate a stack with the orchestrator.

```bash
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_lambda --api_key={{lambda_api_key.api_key}}
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
```

-## Running a Pipeline
-Once configured, run any ZenML pipeline using the SkyPilot VM Orchestrator. Each step executes in a Docker container on a provisioned VM.
+**Running a Pipeline**:
+After configuration, run any ZenML pipeline using the SkyPilot VM Orchestrator; each step executes in a Docker container on a provisioned VM.

-## Additional Configuration
-Customize the orchestrator with cloud-specific `Settings` objects:
+**Additional Configuration**:
+Customize the orchestrator using cloud-specific `Settings` objects:

```python
from zenml.integrations.skypilot_<provider>.flavors.skypilot_orchestrator_<provider>_vm_flavor import SkypilotOrchestratorSettings

@@ -15039,7 +14825,7 @@
skypilot_settings = SkypilotOrchestratorSettings(

@pipeline(settings={"orchestrator": skypilot_settings})
```

-You can also configure resources for individual steps:
+Configure resources per step:

```python
high_resource_settings = SkypilotOrchestratorSettings(...)

@@ -15049,51 +14835,47 @@
def resource_intensive_step():
    ...
```

-For further details and advanced options, refer to the [full SkyPilot VM Orchestrator documentation](../../component-guide/orchestrators/skypilot-vm.md).
+For detailed options, refer to the [full SkyPilot VM Orchestrator documentation](../../component-guide/orchestrators/skypilot-vm.md).

==================================================

=== File: docs/book/how-to/popular-integrations/kubernetes.md ===

-### ZenML Kubernetes Orchestrator Documentation Summary
+### Summary of ZenML Kubernetes Orchestrator Documentation

-The ZenML Kubernetes Orchestrator enables deployment of ML pipelines on a Kubernetes cluster without requiring Kubernetes code. It serves as a lightweight alternative to more complex orchestrators like Airflow or Kubeflow.
+**Overview**: The ZenML Kubernetes Orchestrator enables deployment of ML pipelines on a Kubernetes cluster without needing to write Kubernetes code, serving as a simpler alternative to tools like Airflow or Kubeflow.

-#### Prerequisites
-To use the Kubernetes Orchestrator, ensure you have:
+**Prerequisites**:
- ZenML `kubernetes` integration installed: `zenml integration install kubernetes`
-- Docker installed and running
-- `kubectl` installed
-- A remote artifact store and container registry in your ZenML stack
-- A deployed Kubernetes cluster
-- (Optional) Configured `kubectl` context pointing to the cluster
+- Docker and `kubectl` installed
+- Remote artifact store and container registry in your ZenML stack
+- Deployed Kubernetes cluster
+- Configured `kubectl` context (optional)

-#### Deploying the Orchestrator
-You need a Kubernetes cluster to run the orchestrator. Various deployment methods exist across cloud providers or custom infrastructure; refer to the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md) for more information.

-#### Configuring the Orchestrator
-Configuration can be done in two ways:

+**Deployment Steps**:
+1. 
**Register the Orchestrator**:
+   - Using a Service Connector (recommended for cloud-managed clusters):
+     ```bash
+     zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubernetes
+     zenml service-connector list-resources --resource-type kubernetes-cluster -e
+     zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
+     zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
+     ```

-1. **Using a Service Connector** (recommended for cloud-managed clusters):
-   ```bash
-   zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubernetes
-   zenml service-connector list-resources --resource-type kubernetes-cluster -e
-   zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
-   zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
-   ```

+   - Configuring a `kubectl` context:
+     ```bash
+     zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=kubernetes --kubernetes_context=<KUBERNETES_CONTEXT>
+     zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
+     ```

-2. **Using `kubectl` context**:
-   ```bash
-   zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=kubernetes --kubernetes_context=<KUBERNETES_CONTEXT>
-   zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
-   ```

+**Running a Pipeline**:
+- Execute the pipeline with:
+  ```bash
+  python your_pipeline.py
+  ```
+This command creates a Kubernetes pod for each pipeline step. You can interact with the pods using standard `kubectl` commands.

-#### Running a Pipeline
-To run a ZenML pipeline with the Kubernetes Orchestrator:
-```bash
-python your_pipeline.py
-```
-This command creates a Kubernetes pod for each pipeline step. Use `kubectl` commands to interact with the pods. For advanced configurations, refer to the [full Kubernetes Orchestrator documentation](../../component-guide/orchestrators/kubernetes.md).
+For detailed configuration options, refer to the [full Kubernetes Orchestrator documentation](../../component-guide/orchestrators/kubernetes.md); a configuration sketch follows below.
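As a taste of what such configuration can look like, here is a minimal, hedged sketch using the `KubernetesOrchestratorSettings` class from the `kubernetes` integration; the resource values shown are illustrative assumptions, so check the flavor documentation for the full set of supported fields:

```python
from zenml import pipeline
from zenml.integrations.kubernetes.flavors.kubernetes_orchestrator_flavor import (
    KubernetesOrchestratorSettings,
)
from zenml.integrations.kubernetes.pod_settings import KubernetesPodSettings

# Illustrative values only: wait for each run to finish and request modest
# resources for the step pods.
k8s_settings = KubernetesOrchestratorSettings(
    synchronous=True,
    pod_settings=KubernetesPodSettings(
        resources={"requests": {"cpu": "1", "memory": "2Gi"}},
    ),
)

@pipeline(settings={"orchestrator": k8s_settings})
def my_pipeline() -> None:
    ...
```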
==================================================

=== File: docs/book/how-to/popular-integrations/gcp.md ===

# Minimal GCP Stack Setup Guide

-This guide provides steps to quickly set up a minimal production stack on Google Cloud Platform (GCP) for ZenML.
+This guide outlines the steps to set up a minimal production stack on Google Cloud Platform (GCP) for ZenML.

## Steps to Set Up

### 1. Choose a GCP Project
-Select or create a Google Cloud project in the console. Ensure a billing account is attached.
+Select or create a GCP project in the console. Ensure a billing account is attached.

```bash
gcloud projects create <PROJECT_ID> --billing-project=<BILLING_PROJECT>

@@ -15125,7 +14907,7 @@ Create a service account with the following roles:
- AI Platform Service Agent
- Storage Object Admin

-### 4. Create a JSON Key for Your Service Account
+### 4. Create a JSON Key for the Service Account
Download the JSON key file for the service account.

```bash

@@ -15151,10 +14933,7 @@ Create a GCS bucket and register it as an artifact store.

```bash
export ARTIFACT_STORE_NAME=gcp_artifact_store
-
-zenml artifact-store register ${ARTIFACT_STORE_NAME} --flavor gcp \
-    --path=gs://<bucket-name>
-
+zenml artifact-store register ${ARTIFACT_STORE_NAME} --flavor gcp --path=gs://<bucket-name>
zenml artifact-store connect ${ARTIFACT_STORE_NAME} -i
```

Use Vertex AI as the orchestrator.

```bash
export ORCHESTRATOR_NAME=gcp_vertex_orchestrator
-
-zenml orchestrator register ${ORCHESTRATOR_NAME} --flavor=vertex \
-    --project=<PROJECT_ID> --location=europe-west2
-
+zenml orchestrator register ${ORCHESTRATOR_NAME} --flavor=vertex --project=<PROJECT_ID> --location=europe-west2
zenml orchestrator connect ${ORCHESTRATOR_NAME} -i
```

@@ -15175,9 +14951,7 @@ Register a container registry.

```bash
export CONTAINER_REGISTRY_NAME=gcp_container_registry
-
zenml container-registry register ${CONTAINER_REGISTRY_NAME} --flavor=gcp --uri=<REGISTRY_URI>
-
zenml container-registry connect ${CONTAINER_REGISTRY_NAME} -i
```

@@ -15186,9 +14960,7 @@ Register the stack with the created components.

```bash
export STACK_NAME=gcp_stack
-
-zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} \
-    -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set
+zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set
```

## Cleanup

@@ -15199,26 +14971,26 @@
gcloud projects delete <PROJECT_ID>
```

## Best Practices
-- **IAM and Least Privilege**: Grant minimum permissions necessary for ZenML operations.
-- **Resource Labeling**: Use consistent labels for GCP resources for better management.
-
+- **Use IAM and the Least Privilege Principle:** Grant the minimum permissions necessary for ZenML.
+- **Leverage GCP Resource Labeling:** Use labels for better resource management.
+
  ```bash
  gcloud storage buckets update gs://your-bucket-name --update-labels=project=zenml,environment=production
  ```

-- **Cost Management**: Use GCP's cost management tools to monitor spending.
+- **Implement Cost Management Strategies:** Monitor spending and set budget alerts.
  ```bash
  gcloud billing budgets create --billing-account=BILLING_ACCOUNT_ID --display-name="ZenML Monthly Budget" --budget-amount=1000 --threshold-rule=percent=90
  ```

-- **Backup Strategy**: Regularly back up critical data and enable versioning on GCS buckets.
+- **Implement a Robust Backup Strategy:** Regularly back up data and enable versioning.
  ```bash
  gsutil versioning set on gs://your-bucket-name
  ```

-By following these steps and best practices, you can efficiently set up and manage a GCP stack for ZenML projects.
+By following these steps and best practices, you can effectively set up and manage a GCP stack for ZenML projects.

==================================================

@@ -15229,32 +15001,30 @@ By following these steps and best practices, you can efficiently set up and mana

This guide outlines the steps to create a minimal production stack on Azure for running ZenML pipelines.

## Prerequisites
-- Active Azure account
-- ZenML installed
-- ZenML Azure integration: `zenml integration install azure`
-
-## Steps to Set Up Azure Stack
+- Active Azure account.
+- ZenML installed.
+- ZenML Azure integration: `zenml integration install azure`.

-### 1. Set Up Credentials
-Create a service principal via Azure App Registrations:
-1. Go to App Registrations in the Azure portal.
-2. Click `+ New registration`, name it, and register.
+## 1. Set Up Credentials
+Create a service principal in Azure:
+1. Go to Azure Portal > App Registrations > `+ New registration`.
+2. Name your app and register it.
3. Note the Application ID and Tenant ID.
-4. Under `Certificates & secrets`, create a client secret and note its value.
+4. In `Certificates & secrets`, create a client secret and save the secret value.
+
+## 2. Create Resource Group and AzureML Instance
+1. Go to Azure Portal > Resource Groups > `+ Create`.
+2. After creation, go to the new resource group's overview and click `+ Create`.
+3. Search for and select `Azure Machine Learning` to create an AzureML workspace. Consider creating a container registry as well.

-### 2. Create Resource Group and AzureML Instance
-1. Create a resource group in the Azure portal under `Resource Groups`.
-2. 
In the new resource group, click `+ Create` and select `Azure Machine Learning` to create an AzureML workspace. Optionally, create a container registry.

+## 3. Create Role Assignments
+1. In your resource group, go to `Access control (IAM)` > `+ Add role assignment`.
+2. Assign the following roles: `AzureML Compute Operator`, `AzureML Data Scientist`, and `AzureML Registry User`.
+3. Select your registered app by its ID for each role.

-### 3. Create Role Assignments
-1. In your resource group, go to `Access control (IAM)` and add a new role assignment.
-2. Assign the following roles to your registered app:
-   - AzureML Compute Operator
-   - AzureML Data Scientist
-   - AzureML Registry User

+## 4. Create a Service Connector
+Register a ZenML Azure Service Connector:

-### 4. Create a Service Connector
-Register a ZenML Azure Service Connector using the command:
```bash
zenml service-connector register azure_connector --type azure \
  --auth-method service-principal \
@@ -15263,18 +15033,20 @@ zenml service-connector register azure_connector --type azure \
  --client_id=<CLIENT_ID>
```

-### 5. Create Stack Components
-#### Artifact Store (Azure Blob Storage)
+## 5. Create Stack Components
+### Artifact Store (Azure Blob Storage)
1. Create a container in your AzureML workspace's storage account.
2. Register the artifact store:
+
```bash
zenml artifact-store register azure_artifact_store -f azure \
    --path=<PATH_TO_CONTAINER> \
    --connector azure_connector
```

-#### Orchestrator (AzureML)
+### Orchestrator (AzureML)
Register the orchestrator:
+
```bash
zenml orchestrator register azure_orchestrator -f azureml \
    --subscription_id=<SUBSCRIPTION_ID> \
@@ -15283,17 +15055,19 @@ zenml orchestrator register azure_orchestrator -f azureml \
    --connector azure_connector
```

-#### Container Registry (Azure Container Registry)
+### Container Registry (Azure Container Registry)
Register the container registry:
+
```bash
zenml container-registry register azure_container_registry -f azure \
    --uri=<REGISTRY_URI> \
    --connector azure_connector
```

-### 6. Create a Stack
-Create the Azure ZenML stack:
-```bash
+## 6. Create a Stack
+Register the Azure ZenML stack:
+
+```shell
zenml stack register azure_stack \
    -o azure_orchestrator \
    -a azure_artifact_store \
@@ -15301,8 +15075,9 @@ zenml stack register azure_stack \
    --set
```

-### 7. Run a Pipeline
-Define and run a simple ZenML pipeline:
+## 7. Run a Pipeline
+Define and run a ZenML pipeline:
+
```python
from zenml import pipeline, step

@@ -15317,8 +15092,10 @@ def azure_pipeline():
if __name__ == "__main__":
    azure_pipeline()
```
+
Save as `run.py` and execute:
-```bash
+
+```shell
python run.py
```
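Before launching runs on AzureML, it can help to confirm that the stack registered in step 6 is actually active. A small, hedged sketch using the ZenML client (assuming the component names used above):

```python
from zenml.client import Client

# Quick sanity check of the active stack and its components.
stack = Client().active_stack
print(stack.name)  # expected: "azure_stack"
for component_type, component in stack.components.items():
    print(f"{component_type}: {component.name}")
```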
==================================================

=== File: docs/book/how-to/popular-integrations/kubeflow.md ===

-### Summary of Kubeflow Orchestrator Documentation
+### Kubeflow Orchestrator Overview

-The ZenML Kubeflow Orchestrator enables running ML pipelines on Kubeflow without needing to write Kubeflow code.
+The ZenML Kubeflow Orchestrator enables running ML pipelines on Kubeflow without writing Kubeflow code.

-#### Prerequisites
+### Prerequisites
To use the Kubeflow Orchestrator, ensure you have:
- ZenML `kubeflow` integration installed: `zenml integration install kubeflow`
- Docker installed and running
-- `kubectl` installed (optional)
+- (Optional) `kubectl` installed
- A Kubernetes cluster with Kubeflow Pipelines
- A remote artifact store and container registry in your ZenML stack
-- A remote ZenML server deployed to the cloud
+- A remote ZenML server deployed
- (Optional) Kubernetes context name for the remote cluster

-#### Configuring the Orchestrator
+### Configuring the Orchestrator
You can configure the orchestrator in two ways:

1. **Using a Service Connector** (recommended for cloud-managed clusters):

@@ -15362,15 +15139,15 @@ You can configure the orchestrator in two ways:
   zenml stack update -o <ORCHESTRATOR_NAME>
   ```

-#### Running a Pipeline
+### Running a Pipeline
To run a ZenML pipeline using the Kubeflow Orchestrator:
```bash
python your_pipeline.py
```
This creates a Kubernetes pod for each pipeline step, viewable in the Kubeflow UI.

-#### Additional Configuration
-Further configuration can be done using `KubeflowOrchestratorSettings`:
+### Additional Configuration
+Further configure the orchestrator with `KubeflowOrchestratorSettings`:
```python
from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings

@@ -15386,12 +15163,12 @@ kubeflow_settings = KubeflowOrchestratorSettings(

@pipeline(settings={"orchestrator": kubeflow_settings})
```

-#### Multi-Tenancy Deployments
+### Multi-Tenancy Deployments
For multi-tenant setups, register the orchestrator with the `kubeflow_hostname`:
```bash
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=kubeflow --kubeflow_hostname=<KUBEFLOW_HOSTNAME>
```
-Provide credentials in the orchestrator settings:
+Provide the namespace, username, and password in the settings:
```python
kubeflow_settings = KubeflowOrchestratorSettings(
    client_username="admin",

@@ -15402,7 +15179,7 @@ kubeflow_settings = KubeflowOrchestratorSettings(

@pipeline(settings={"orchestrator": kubeflow_settings})
```

-For more details, refer to the [full Kubeflow Orchestrator documentation](../../component-guide/orchestrators/kubeflow.md).
+For more details, refer to the full [Kubeflow Orchestrator documentation](../../component-guide/orchestrators/kubeflow.md).

==================================================

=== File: docs/book/how-to/popular-integrations/aws.md ===

# AWS Stack Setup for ZenML Pipelines

-This guide outlines the steps to create a minimal production stack on AWS for running ZenML pipelines, including setting up IAM roles, service connectors, and stack components.
+## Overview
+This guide provides steps to create a minimal production stack on AWS for running ZenML pipelines, including setting up IAM roles, service connectors, and stack components.

## Prerequisites
- Active AWS account with permissions for S3, SageMaker, ECR, and ECS.
- ZenML installed.
-- AWS CLI configured with credentials.
+- AWS CLI installed and configured.

## Steps

### 1. Set Up Credentials and Local Environment
-1. **Choose AWS Region**: Select the region for deployment (e.g., `us-east-1`).
+1. **Choose AWS Region**: Select the region for your ZenML stack (e.g., `us-east-1`).
2. 
**Create IAM Role**:
   - Get your AWS account ID:
     ```shell

@@ -15452,7 +15230,7 @@
     aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
     aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
     ```
-3. **Install ZenML Integrations**:
+3. **Install the ZenML AWS Integration**:
   ```shell
   zenml integration install aws s3 -y
   ```

@@ -15481,7 +15259,7 @@ zenml service-connector register aws_connector \
```

#### Orchestrator (SageMaker Pipelines)
-1. Create a SageMaker domain (follow AWS documentation).
+1. Create a SageMaker domain (if not already created).
2. Register the SageMaker orchestrator:
   ```shell
   zenml orchestrator register sagemaker-orchestrator --flavor=sagemaker --region=<REGION> --execution_role=<ROLE_ARN>

@@ -15497,14 +15275,16 @@ zenml service-connector register aws_connector \
   zenml container-registry register ecr-registry --flavor=aws --uri=<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com --connector aws_connector
   ```

-### 4. Create Stack
+### 4. Create the ZenML Stack
```shell
export STACK_NAME=aws_stack
-zenml stack register ${STACK_NAME} -o sagemaker-orchestrator -a cloud_artifact_store -c ecr-registry --set
+
+zenml stack register ${STACK_NAME} -o sagemaker-orchestrator \
    -a cloud_artifact_store -c ecr-registry --set
```

### 5. Run a Pipeline
-Define and execute a simple ZenML pipeline:
+Define and run a simple ZenML pipeline:
```python
from zenml import pipeline, step

@@ -15519,113 +15299,343 @@ def aws_sagemaker_pipeline():
if __name__ == "__main__":
    aws_sagemaker_pipeline()
```
-Run the pipeline:
+Execute:
```shell
python run.py
```

## Cleanup
-To delete resources:
+To avoid charges, delete the AWS resources when you are done:
```shell
+# Delete S3 bucket
aws s3 rm s3://your-bucket-name --recursive
aws s3api delete-bucket --bucket your-bucket-name
+
+# Delete SageMaker domain
aws sagemaker delete-domain --domain-id <DOMAIN_ID>
+
+# Delete ECR repository
aws ecr delete-repository --repository-name zenml --force
+
+# Detach policies from IAM role
aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
+
+# Delete IAM role
aws iam delete-role --role-name zenml-role
```

-## Conclusion
-This guide provides a streamlined process for setting up an AWS stack with ZenML, enabling scalable and efficient machine learning pipelines. Key steps include IAM role creation, service connector setup, and stack component registration. Follow best practices for security and cost management to optimize your AWS resources.
+## Conclusion
+This guide outlines the steps to set up an AWS stack for ZenML, enabling scalable and efficient machine learning pipelines. Key components include IAM roles, S3 for artifact storage, SageMaker for orchestration, and ECR for container management. Follow best practices for security and cost management to optimize your AWS stack usage.
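The SageMaker orchestrator can also be tuned per pipeline. A hedged sketch using `SagemakerOrchestratorSettings`; the `instance_type` field is an assumption, so verify the available fields against the SageMaker flavor documentation for your ZenML version:

```python
from zenml import pipeline
from zenml.integrations.aws.flavors.sagemaker_orchestrator_flavor import (
    SagemakerOrchestratorSettings,
)

# Illustrative only: request a specific instance type for the pipeline steps.
sagemaker_settings = SagemakerOrchestratorSettings(instance_type="ml.m5.xlarge")

@pipeline(settings={"orchestrator": sagemaker_settings})
def aws_sagemaker_pipeline() -> None:
    ...
```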
+
+==================================================
+
+=== File: docs/book/how-to/popular-integrations/mlflow.md ===
+
+### MLflow Experiment Tracker with ZenML
+
+The ZenML MLflow Experiment Tracker integration allows for logging and visualizing pipeline information using MLflow without additional code.
+
+#### Prerequisites
+- Install the ZenML MLflow integration:
+  ```bash
+  zenml integration install mlflow -y
+  ```
+- Set up an MLflow deployment (local, or remote with proxied artifact storage).
+
+#### Configuring the Experiment Tracker
+1. **Local Deployment**:
+   - Suitable for local ZenML runs; no extra configuration needed.
+   ```bash
+   zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow
+   zenml stack register custom_stack -e mlflow_experiment_tracker ... --set
+   ```
+
+2. **Remote Deployment**:
+   - Requires authentication (ZenML secrets recommended).
+   ```bash
+   zenml secret create mlflow_secret --username=<USERNAME> --password=<PASSWORD>
+   zenml experiment-tracker register mlflow --flavor=mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}} ...
+   ```
+
+#### Using the Experiment Tracker
+To log information in a pipeline step:
+1. Enable the experiment tracker with the `@step` decorator.
+2. Use MLflow's logging capabilities.
+   ```python
+   import mlflow
+
+   @step(experiment_tracker="<EXPERIMENT_TRACKER_NAME>")
+   def train_step(...):
+       mlflow.tensorflow.autolog()
+       mlflow.log_param(...)
+       mlflow.log_metric(...)
+       mlflow.log_artifact(...)
+   ```
+
+#### Viewing Results
+Retrieve the MLflow experiment URL for a ZenML run:
+```python
+last_run = client.get_pipeline("<PIPELINE_NAME>").last_run
+tracking_url = last_run.get_step("<STEP_NAME>").run_metadata["experiment_tracker_url"].value
+```
+
+#### Additional Configuration
+Further configure the experiment tracker using `MLFlowExperimentTrackerSettings`:
+```python
+from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings
+
+mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"})
+
+@step(experiment_tracker="<EXPERIMENT_TRACKER_NAME>", settings={"experiment_tracker": mlflow_settings})
+```
+
+For more details, refer to the [full MLflow Experiment Tracker documentation](../../component-guide/experiment-trackers/mlflow.md).
+
+==================================================
+
+=== File: docs/book/how-to/popular-integrations/README.md ===
+
+# ZenML Integrations Guide
+
+ZenML integrates seamlessly with popular tools in the data science and machine learning ecosystem. This guide provides instructions for integrating ZenML with these tools.
+
+**Key Points:**
+- ZenML is designed for compatibility with various data science and machine learning tools.
+- The integration process is straightforward, enhancing workflow efficiency.
+
+For specific integration examples and detailed steps, refer to the respective sections in the documentation.
+
+==================================================
+
+=== File: docs/book/how-to/trigger-pipelines/use-templates-python.md ===
+
+### ZenML Template Creation and Execution
+
+**Feature Note:** This functionality is exclusive to [ZenML Pro](https://zenml.io/pro). Sign up [here](https://cloud.zenml.io) for access.
+
+#### Create a Template
+
+To create a run template using the ZenML client, make sure to select a pipeline run that was executed on a remote stack:
+
+```python
+from zenml.client import Client
+
+run = Client().get_pipeline_run("<RUN_NAME_OR_ID>")
+Client().create_run_template(name="<TEMPLATE_NAME>", deployment_id=run.deployment_id)
+```
+
+Alternatively, create a template directly from your pipeline definition while using a remote stack:
+
+```python
+from zenml import pipeline
+
+@pipeline
+def my_pipeline():
+    ...
+
+template = my_pipeline.create_run_template(name="<TEMPLATE_NAME>")
+```
+
+#### Run a Template
+
+To execute a template, retrieve it and trigger the pipeline:
+
+```python
+from zenml.client import Client
+
+template = Client().get_run_template("<TEMPLATE_NAME_OR_ID>")
+config = template.config_template
+
+# [OPTIONAL] Modify the config here
+
+Client().trigger_pipeline(template_id=template.id, run_configuration=config)
+```
+
+The new run will execute on the same stack as the original.
+
+#### Advanced Usage: Run a Template from Another Pipeline
+
+You can trigger one pipeline from another using the following structure:
+
+```python
+import pandas as pd
+from zenml import pipeline, step
+from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact
+from zenml.artifacts.utils import load_artifact
+from zenml.client import Client
+from zenml.config.pipeline_run_configuration import PipelineRunConfiguration
+
+@step
+def trainer(data_artifact_id: str):
+    df = load_artifact(data_artifact_id)
+
+@pipeline
+def training_pipeline():
+    trainer()
+
+@step
+def load_data() -> pd.DataFrame:
+    ...
+
+@step
+def trigger_pipeline(df: UnmaterializedArtifact):
+    run_config = PipelineRunConfiguration(
+        steps={"trainer": {"parameters": {"data_artifact_id": df.id}}}
+    )
+    Client().trigger_pipeline("training_pipeline", run_configuration=run_config)
+
+@pipeline
+def loads_data_and_triggers_training():
+    df = load_data()
+    trigger_pipeline(df)  # Triggers the training pipeline
+```
+
+For more details, refer to the [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) and [`trigger_pipeline`](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client) documentation, as well as information on Unmaterialized Artifacts [here](../data-artifact-management/complex-usecases/unmaterialized-artifacts.md).
+
+==================================================
+
+=== File: docs/book/how-to/trigger-pipelines/use-templates-cli.md ===
+
+### ZenML CLI: Create a Run Template
+
+**Feature Access**: This feature is available only in ZenML Pro. [Sign up here](https://cloud.zenml.io) for access.
+
+**Command to Create a Template**:
+Use the following command to create a run template with the ZenML CLI:
+
+```bash
+zenml pipeline create-run-template <PIPELINE_SOURCE_PATH> --name=<TEMPLATE_NAME>
+```
+- `<PIPELINE_SOURCE_PATH>`: Should be `run.my_pipeline` if the pipeline is defined in `run.py`.
+
+**Requirements**:
+- An active **remote stack** is required. You can specify one using the `--stack` option.
+
+==================================================
+
+=== File: docs/book/how-to/trigger-pipelines/use-templates-dashboard.md ===
+
+### ZenML Dashboard: Creating and Running Templates
+
+**Feature Availability**: This functionality is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access.
+
+#### Creating a Template
+1. Navigate to a pipeline run executed on a remote stack (requires a remote orchestrator, artifact store, and container registry).
+2. Click on `+ New Template`, provide a name, and click `Create`.
+
+#### Running a Template
+- To run a template:
+  - Click `Run a Pipeline` on the main `Pipelines` page, or
+  - Go to a specific template page and select `Run Template`.
+
+You will be directed to the `Run Details` page, where you can:
+- Upload a `.yaml` configuration file, or
+- Modify the configuration using the editor.
+
+Upon running the template, a new run will start on the same stack as the original run.
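Whichever interface you use, the optional config-modification step from the Python snippet above can be made concrete. A hedged sketch: the step name `trainer` and the `learning_rate` parameter are hypothetical, and the exact shape of `config_template` mirrors `PipelineRunConfiguration` for your pipeline:

```python
from zenml.client import Client

template = Client().get_run_template("<TEMPLATE_NAME_OR_ID>")
config = template.config_template

# Hypothetical tweak: override one step parameter before triggering.
config.steps["trainer"].parameters["learning_rate"] = 0.01

Client().trigger_pipeline(template_id=template.id, run_configuration=config)
```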
+ +================================================== + +=== File: docs/book/how-to/trigger-pipelines/README.md === + +### Trigger a Pipeline in ZenML + +ZenML allows you to trigger a pipeline using a simple function decorated with `@pipeline`. Below is an example of a basic pipeline that loads data and trains a model: + +```python +from zenml import step, pipeline + +@step +def load_data() -> dict: + return {'features': [[1, 2], [3, 4], [5, 6]], 'labels': [0, 1, 0]} + +@step +def train_model(data: dict) -> None: + print(f"Trained model using {len(data['features'])} data points.") + +@pipeline +def simple_ml_pipeline(): + dataset = load_data() + train_model(dataset) + +if __name__ == "__main__": + simple_ml_pipeline() +``` + +### Run Templates -================================================== +**Run Templates** are pre-defined, parameterized configurations for executing ZenML pipelines. They can be customized and executed from the ZenML dashboard or via the Client/REST API. This feature is exclusive to ZenML Pro users. -=== File: docs/book/how-to/popular-integrations/mlflow.md === +For more details, refer to the following resources: +- Use templates: Python SDK +- Use templates: CLI +- Use templates: Dashboard +- Use templates: REST API -# MLflow Experiment Tracker with ZenML +![Working with Templates](../../../.gitbook/assets/run-templates.gif) -## Overview -The ZenML MLflow Experiment Tracker integration allows logging and visualization of pipeline step information using MLflow without additional coding. +This documentation provides a concise overview of triggering pipelines and utilizing Run Templates in ZenML. -## Prerequisites -- Install ZenML MLflow integration: - ```bash - zenml integration install mlflow -y - ``` -- An MLflow deployment (local or remote with proxied artifact storage). +================================================== -## Configuring the Experiment Tracker -### 1. Local Deployment -- Suitable for local ZenML runs; no extra configuration needed. - ```bash - zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow - zenml stack register custom_stack -e mlflow_experiment_tracker ... --set - ``` +=== File: docs/book/how-to/trigger-pipelines/use-templates-rest-api.md === -### 2. Remote Deployment -- Requires authentication configuration (recommended: ZenML secrets). - Create a secret: - ```bash - zenml secret create mlflow_secret --username= --password= - ``` - Register the experiment tracker: - ```bash - zenml experiment-tracker register mlflow --flavor=mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}} ... - ``` +### ZenML REST API: Creating and Running a Template -## Using the Experiment Tracker -To log information in a pipeline step: -1. Enable the experiment tracker with the `@step` decorator. -2. Use MLflow's logging capabilities. - ```python - import mlflow +**Note:** This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. - @step(experiment_tracker="") - def train_step(...): - mlflow.tensorflow.autolog() - mlflow.log_param(...) - mlflow.log_metric(...) - mlflow.log_artifact(...) 
- ```
+**Run Templates** are pre-defined, parameterized configurations for executing ZenML pipelines. They can be customized and executed from the ZenML dashboard or via the Client/REST API. This feature is exclusive to ZenML Pro users.

-## Viewing Results
-Retrieve the MLflow experiment URL for a ZenML run:
-```python
-last_run = client.get_pipeline("<PIPELINE_NAME>").last_run
-trainer_step = last_run.get_step("<STEP_NAME>")
-tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
-```
+For more details, refer to the following resources:
+- Use templates: Python SDK
+- Use templates: CLI
+- Use templates: Dashboard
+- Use templates: REST API

-## Additional Configuration
-Further configure the experiment tracker using `MLFlowExperimentTrackerSettings`:
-```python
-from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings
-
-mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"})
-
-@step(
-    experiment_tracker="<EXPERIMENT_TRACKER_NAME>",
-    settings={"experiment_tracker": mlflow_settings}
-)
-```
+![Working with Templates](../../../.gitbook/assets/run-templates.gif)

-For more details, refer to the full [MLflow Experiment Tracker documentation](../../component-guide/experiment-trackers/mlflow.md).
+This documentation provides a concise overview of triggering pipelines and utilizing Run Templates in ZenML.

==================================================

-=== File: docs/book/how-to/popular-integrations/README.md ===
+=== File: docs/book/how-to/trigger-pipelines/use-templates-rest-api.md ===

-# ZenML Integrations Guide
+### ZenML REST API: Creating and Running a Template

-ZenML integrates seamlessly with popular tools in the data science and machine learning ecosystem. This documentation provides guidance on how to set up these integrations.
+**Note:** This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access.

+### Triggering a Pipeline via REST API

+To trigger a pipeline, ensure you have at least one run template created. Follow these steps:

+1. **Get Pipeline ID:**
+   - Call: `GET /pipelines?name=<PIPELINE_NAME>`
+   - The response includes the `<PIPELINE_ID>`.

+2. **Get Template ID:**
+   - Call: `GET /run_templates?pipeline_id=<PIPELINE_ID>`
+   - The response includes the `<TEMPLATE_ID>`.

+3. **Run the Pipeline:**
+   - Call: `POST /run_templates/<TEMPLATE_ID>/runs` with an optional `PipelineRunConfiguration` in the body.

+### Example Workflow

+To re-run a pipeline named `training`, execute the following commands:

+1. **Retrieve Pipeline ID:**
+   ```shell
+   curl -X 'GET' \
+     '<YOUR_ZENML_SERVER_URL>/api/v1/pipelines?hydrate=false&name=training' \
+     -H 'accept: application/json' \
+     -H 'Authorization: Bearer <YOUR_API_TOKEN>'
+   ```

+2. **Retrieve Template ID:**
+   ```shell
+   curl -X 'GET' \
+     '<YOUR_ZENML_SERVER_URL>/api/v1/run_templates?hydrate=false&pipeline_id=<PIPELINE_ID>' \
+     -H 'accept: application/json' \
+     -H 'Authorization: Bearer <YOUR_API_TOKEN>'
+   ```

+3. **Trigger the Pipeline:**
+   ```shell
+   curl -X 'POST' \
+     '<YOUR_ZENML_SERVER_URL>/api/v1/run_templates/<TEMPLATE_ID>/runs' \
+     -H 'accept: application/json' \
+     -H 'Content-Type: application/json' \
+     -H 'Authorization: Bearer <YOUR_API_TOKEN>' \
+     -d '{
+       "steps": {"model_trainer": {"parameters": {"model_type": "rf"}}}
+     }'
+   ```

+A successful response indicates the pipeline has been re-triggered with the specified configuration.

-### Key Points:
-- ZenML is designed for compatibility with various tools used in data science and machine learning.
-- The guide includes step-by-step instructions for integrating ZenML with these tools.
+For details on obtaining a bearer token, refer to the [API reference](../../../reference/api-reference.md#using-a-bearer-token-to-access-the-api-programmatically).

-For further details, refer to the specific integration sections within the documentation.
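The same three-call workflow can also be scripted. A hedged Python sketch using `requests`: the server URL and token are placeholders, and the paginated response is assumed to carry an `items` list as in the standard ZenML API schema:

```python
import requests

ZENML_SERVER = "https://<YOUR_ZENML_SERVER>"  # placeholder
HEADERS = {
    "accept": "application/json",
    "Authorization": "Bearer <YOUR_API_TOKEN>",  # placeholder
}

# 1. Look up the pipeline ID by name.
resp = requests.get(f"{ZENML_SERVER}/api/v1/pipelines",
                    params={"name": "training"}, headers=HEADERS)
pipeline_id = resp.json()["items"][0]["id"]

# 2. Find a run template registered for that pipeline.
resp = requests.get(f"{ZENML_SERVER}/api/v1/run_templates",
                    params={"pipeline_id": pipeline_id}, headers=HEADERS)
template_id = resp.json()["items"][0]["id"]

# 3. Trigger a run, overriding one step parameter.
resp = requests.post(
    f"{ZENML_SERVER}/api/v1/run_templates/{template_id}/runs",
    headers=HEADERS,
    json={"steps": {"model_trainer": {"parameters": {"model_type": "rf"}}}},
)
resp.raise_for_status()
```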
==================================================

@@ -15633,29 +15643,47 @@ For further details, refer to the specific integration sections within the docum

# Infrastructure and Deployment Summary

-This section details the infrastructure setup and deployment processes in ZenML. Key points include:
+This section details the infrastructure setup and deployment processes for ZenML. Key points include:

-1. **Infrastructure Requirements**: ZenML requires a cloud provider or on-premises setup. Supported providers include AWS, GCP, and Azure.
+1. **Infrastructure Requirements**:
+   - ZenML can be deployed on various cloud platforms (AWS, GCP, Azure) or on-premises.
+   - Ensure the environment meets the necessary hardware and software specifications.

-2. **Deployment Options**: Users can deploy ZenML pipelines in various environments, such as local, cloud, or hybrid setups.
+2. **Deployment Options**:
+   - **Managed Services**: Utilize cloud providers' managed services for ease of use and scalability.
+   - **Self-Hosted**: Deploy ZenML on your own servers for greater control and customization.

-3. **Configuration**: Configuration is managed through a `zenml.yaml` file, which specifies pipeline components, storage, and orchestration settings.
+3. **Configuration**:
+   - Define configuration files (e.g., `zenml.yaml`) to specify pipeline components, storage, and execution environments.
+   - Use environment variables for sensitive information (e.g., API keys).

-4. **Installation**: ZenML can be installed via pip:
-   ```bash
-   pip install zenml
-   ```
+4. **Installation**:
+   - Install ZenML via pip:
+     ```bash
+     pip install zenml
+     ```
+   - Initialize a new ZenML repository:
+     ```bash
+     zenml init
+     ```

-5. **Setting Up a Stack**: A stack in ZenML consists of a version control system, artifact store, and orchestrator. Use the following command to create a stack:
-   ```bash
-   zenml stack create --artifact-store --orchestrator
-   ```
+5. **Pipeline Deployment**:
+   - Create pipelines using decorators and define steps with functions.
+   - Example pipeline setup:
+     ```python
+     from zenml import pipeline

-6. **Connecting to Cloud Services**: Users must authenticate and configure cloud services, typically through environment variables or configuration files.
+     @pipeline
+     def my_pipeline():
+         # Define pipeline steps here
+         pass
+     ```

-7. **Deployment Best Practices**: It is recommended to use version control for pipeline code and maintain consistent environments for reproducibility.
+6. **Monitoring and Maintenance**:
+   - Implement logging and monitoring to track pipeline performance.
+   - Regularly update ZenML to benefit from new features and security patches.

-This summary encapsulates the critical aspects of infrastructure and deployment in ZenML, ensuring that essential information is retained for further inquiries.
+This summary encapsulates the essential elements of infrastructure and deployment in ZenML, ensuring clarity and focus on critical technical details.

==================================================

### Summary: Registering Existing Infrastructure with ZenML - A Guide for Terraform Users

-#### Overview
-This guide assists advanced users in integrating ZenML with their existing Terraform setups, focusing on managing custom Terraform code while utilizing the ZenML provider.
+This guide helps advanced users integrate ZenML with existing Terraform setups, focusing on managing custom Terraform code using the ZenML provider.

#### Two-Phase Approach
-1. **Infrastructure Deployment**: Creating cloud resources (managed by platform teams).
-2. **ZenML Registration**: Registering these resources as ZenML stack components.
+1. **Infrastructure Deployment**: Create cloud resources (handled by platform teams).
+2. **ZenML Registration**: Register these resources as ZenML stack components.
#### Phase 1: Infrastructure Deployment -Example of existing GCP infrastructure: +You may already have existing Terraform configurations, such as: + ```hcl resource "google_storage_bucket" "ml_artifacts" { name = "company-ml-artifacts" @@ -15685,25 +15713,34 @@ resource "google_artifact_registry_repository" "ml_containers" { ``` #### Phase 2: ZenML Registration + **Setup the ZenML Provider**: +Configure the ZenML provider to connect with your ZenML server: + ```hcl terraform { required_providers { - zenml = { source = "zenml-io/zenml" } + zenml = { + source = "zenml-io/zenml" + } } } provider "zenml" { - # Configuration from environment variables: - # ZENML_SERVER_URL, ZENML_API_KEY + # Configuration options from environment variables: + # ZENML_SERVER_URL + # ZENML_API_KEY } ``` -Generate API key: + +**Generate API Key**: ```bash zenml service-account create ``` **Create Service Connectors**: +Service connectors manage authentication: + ```hcl resource "zenml_service_connector" "gcp_connector" { name = "gcp-${var.environment}-connector" @@ -15730,6 +15767,8 @@ resource "zenml_stack_component" "artifact_store" { ``` **Register Stack Components**: +Register various component types: + ```hcl locals { component_configs = { @@ -15751,20 +15790,25 @@ resource "zenml_stack_component" "components" { ``` **Assemble the Stack**: +Combine components into a stack: + ```hcl resource "zenml_stack" "ml_stack" { name = "${var.environment}-ml-stack" - components = { for k, v in zenml_stack_component.components : k => v.id } + components = { + for k, v in zenml_stack_component.components : k => v.id + } } ``` -#### Practical Walkthrough: Registering Existing GCP Infrastructure +### Practical Example: Registering Existing GCP Infrastructure + **Prerequisites**: - GCS bucket for artifacts - Artifact Registry repository - Service account for ML operations -- Vertex AI enabled +- Vertex AI enabled for orchestration **Variables Configuration**: ```hcl @@ -15785,8 +15829,15 @@ terraform { } } -provider "zenml" { server_url = var.zenml_server_url; api_key = var.zenml_api_key } -provider "google" { project = var.project_id; region = var.region } +provider "zenml" { + server_url = var.zenml_server_url + api_key = var.zenml_api_key +} + +provider "google" { + project = var.project_id + region = var.region +} resource "google_storage_bucket" "artifacts" { name = "${var.project_id}-zenml-artifacts-${var.environment}" @@ -15803,7 +15854,11 @@ resource "zenml_service_connector" "gcp" { name = "gcp-${var.environment}" type = "gcp" auth_method = "service-account" - configuration = { project_id = var.project_id; region = var.region; service_account_json = var.gcp_service_account_key } + configuration = { + project_id = var.project_id + region = var.region + service_account_json = var.gcp_service_account_key + } } resource "zenml_stack_component" "artifact_store" { @@ -15826,10 +15881,21 @@ resource "zenml_stack" "gcp_stack" { **Outputs Configuration**: ```hcl -output "stack_id" { value = zenml_stack.gcp_stack.id } -output "stack_name" { value = zenml_stack.gcp_stack.name } -output "artifact_store_path" { value = "${google_storage_bucket.artifacts.name}/artifacts" } -output "container_registry_uri" { value = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.containers.repository_id}" } +output "stack_id" { + value = zenml_stack.gcp_stack.id +} + +output "stack_name" { + value = zenml_stack.gcp_stack.name +} + +output "artifact_store_path" { + value = 
"${google_storage_bucket.artifacts.name}/artifacts" +} + +output "container_registry_uri" { + value = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.containers.repository_id}" +} ``` **terraform.tfvars Configuration**: @@ -15839,13 +15905,15 @@ project_id = "your-gcp-project-id" region = "us-central1" environment = "dev" ``` + +**Sensitive Variables**: Store sensitive variables in environment variables: ```bash export TF_VAR_zenml_api_key="your-zenml-api-key" export TF_VAR_gcp_service_account_key=$(cat path/to/service-account-key.json) ``` -#### Usage Instructions +### Usage Instructions 1. Initialize Terraform: ```bash terraform init @@ -15858,27 +15926,27 @@ export TF_VAR_gcp_service_account_key=$(cat path/to/service-account-key.json) ```bash terraform plan ``` -4. Apply configuration: +4. Apply the configuration: ```bash terraform apply ``` -5. Set the new stack as active: +5. Set the newly created stack as active: ```bash zenml stack set $(terraform output -raw stack_name) ``` -6. Verify configuration: +6. Verify the configuration: ```bash zenml stack describe ``` -#### Best Practices +### Best Practices - Use appropriate IAM roles and permissions. -- Follow security practices for credential management. -- Consider Terraform workspaces for multiple environments. +- Follow security practices for handling credentials. +- Consider using Terraform workspaces for multiple environments. - Regularly back up Terraform state files. - Version control Terraform configurations (excluding sensitive files). -For more information, refer to the [ZenML provider documentation](https://registry.terraform.io/providers/zenml-io/zenml/latest). +For more information on the ZenML Terraform provider, visit the [ZenML provider documentation](https://registry.terraform.io/providers/zenml-io/zenml/latest). ================================================== @@ -15887,27 +15955,21 @@ For more information, refer to the [ZenML provider documentation](https://regist # Summary: Best Practices for Using IaC with ZenML ## Overview -This documentation outlines best practices for architecting scalable ML infrastructure using ZenML and Terraform. It addresses challenges such as supporting multiple teams, maintaining security, and ensuring quick iterations. +This documentation outlines best practices for architecting scalable ML infrastructure using ZenML and Terraform. It addresses challenges such as supporting multiple teams, maintaining security, and enabling rapid iteration. ## ZenML Approach -ZenML uses stack components as abstractions over infrastructure resources, promoting a component-based architecture for reusability and consistency. +ZenML uses stack components as abstractions over infrastructure resources, facilitating a component-based architecture for reusability and consistency. ### Part 1: Stack Component Architecture -- **Problem**: Different teams require varied ML infrastructure configurations. +- **Problem**: Different teams require varied infrastructure configurations. - **Solution**: Create reusable modules that correspond to ZenML stack components. 
-**Base Infrastructure Example:**
+**Base Infrastructure Example**:
```hcl
-# modules/zenml_stack_base/main.tf
-terraform {
-  required_providers {
-    zenml = { source = "zenml-io/zenml" }
-    google = { source = "hashicorp/google" }
-  }
+resource "random_id" "suffix" {
+  byte_length = 6
}

-resource "random_id" "suffix" { byte_length = 6 }
-
module "base_infrastructure" {
  source = "./modules/base_infra"
  environment = var.environment

}

resource "zenml_service_connector" "base_connector" {
-  name = "${var.environment}-base-connector"
-  type = "gcp"
+  name        = "${var.environment}-base-connector"
+  type        = "gcp"
  auth_method = "service-account"
  configuration = {
    project_id = var.project_id

    service_account_json = module.base_infrastructure.service_account_key
  }
}
-
-resource "zenml_stack" "base_stack" {
-  name = "${var.environment}-base-stack"
-  components = {
-    artifact_store     = zenml_stack_component.artifact_store.id
-    container_registry = zenml_stack_component.container_registry.id
-    orchestrator       = zenml_stack_component.orchestrator.id
-  }
-}
```

Teams can extend this base stack with specific components.

### Part 2: Environment Management and Authentication
-- **Problem**: Different environments require unique configurations and authentication methods.
-- **Solution**: Use a flexible service connector setup that adapts to the environment.
+- **Problem**: Different environments require tailored authentication and configurations.
+- **Solution**: Use a flexible service connector setup that adapts to each environment.

-**Environment-Specific Connector Example:**
+**Environment-Specific Connector Example**:
```hcl
locals {
  env_config = {
-    dev  = { machine_type = "n1-standard-4", gpu_enabled = false, auth_method = "service-account" }
-    prod = { machine_type = "n1-standard-8", gpu_enabled = true, auth_method = "external-account" }
+    dev  = { machine_type = "n1-standard-4", gpu_enabled = false, auth_method = "service-account", auth_configuration = { service_account_json = file("dev-sa.json") } }
+    prod = { machine_type = "n1-standard-8", gpu_enabled = true, auth_method = "external-account", auth_configuration = { external_account_json = file("prod-sa.json") } }
  }
}

resource "zenml_service_connector" "env_connector" {
-  name = "${var.environment}-connector"
-  type = "gcp"
+  name        = "${var.environment}-connector"
+  type        = "gcp"
  auth_method = local.env_config[var.environment].auth_method
+  configuration = try(local.env_config[var.environment].auth_configuration, {})
}
```

### Part 3: Resource Sharing and Isolation
-- **Problem**: Need for strict isolation of resources across ML projects.
+- **Problem**: Ensuring data isolation and compliance across ML projects.
- **Solution**: Implement resource sharing with project isolation.
-**Project Isolation Example:** +**Project Isolation Example**: ```hcl locals { project_paths = { fraud_detection = "projects/fraud_detection/${var.environment}", recommendation = "projects/recommendation/${var.environment}" } @@ -15971,38 +16031,36 @@ locals { resource "zenml_stack_component" "project_artifact_stores" { for_each = local.project_paths - name = "${each.key}-artifact-store" + name = "${each.key}-artifact-store" + type = "artifact_store" configuration = { path = "gs://${var.shared_bucket}/${each.value}" } } ``` ### Part 4: Advanced Stack Management Practices -1. **Stack Component Versioning** +1. **Stack Component Versioning**: ```hcl locals { stack_version = "1.2.0" } resource "zenml_stack" "versioned_stack" { name = "stack-v${local.stack_version}" } ``` -2. **Service Connector Management** +2. **Service Connector Management**: ```hcl resource "zenml_service_connector" "env_connector" { - name = "${var.environment}-${var.purpose}-connector" + name = "${var.environment}-${var.purpose}-connector" auth_method = var.environment == "prod" ? "workload-identity" : "service-account" } ``` -3. **Component Configuration Management** +3. **Component Configuration Management**: ```hcl locals { base_configs = { orchestrator = { location = var.region, project = var.project_id } } - } - resource "zenml_stack_component" "configured_component" { - name = "${var.environment}-${var.component_type}" - configuration = merge(local.base_configs[var.component_type], try(local.env_configs[var.environment][var.component_type], {})) + env_configs = { dev = { orchestrator = { machine_type = "n1-standard-4" } }, prod = { orchestrator = { machine_type = "n1-standard-8" } } } } ``` -4. **Stack Organization and Dependencies** +4. **Stack Organization and Dependencies**: ```hcl module "ml_stack" { source = "./modules/ml_stack" @@ -16010,7 +16068,7 @@ resource "zenml_stack_component" "project_artifact_stores" { } ``` -5. **State Management** +5. **State Management**: ```hcl terraform { backend "gcs" { prefix = "terraform/state" } @@ -16018,7 +16076,7 @@ resource "zenml_stack_component" "project_artifact_stores" { ``` ## Conclusion -Using ZenML and Terraform enables the creation of flexible, maintainable, and secure ML infrastructure. Following best practices ensures a clean and scalable codebase while facilitating efficient management of ML operations. +Utilizing ZenML and Terraform allows for the creation of a flexible, maintainable, and secure ML infrastructure. The ZenML provider streamlines the process while adhering to infrastructure-as-code best practices. Key recommendations include maintaining DRY configurations, consistent naming, and proper state management. ================================================== @@ -16026,9 +16084,9 @@ Using ZenML and Terraform enables the creation of flexible, maintainable, and se ### Integrate with Infrastructure as Code -**Infrastructure as Code (IaC)** is the practice of managing and provisioning infrastructure through code rather than manual processes. This section covers integrating ZenML with popular IaC tools, specifically **Terraform**. +**Infrastructure as Code (IaC)** is the practice of managing and provisioning infrastructure through code rather than manual processes. This section covers how to integrate ZenML with popular IaC tools, specifically **Terraform**. -For more details, refer to the [Terraform documentation](https://www.terraform.io/). +For more information on IaC, visit [AWS: What is IaC](https://aws.amazon.com/what-is/iac). 
![ZenML stack on Terraform Registry](../../../.gitbook/assets/terraform_providers_screenshot.png) @@ -16036,28 +16094,33 @@ For more details, refer to the [Terraform documentation](https://www.terraform.i === File: docs/book/how-to/infrastructure-deployment/auth-management/azure-service-connector.md === -### Azure Service Connector Documentation Summary +### Summary of Azure Service Connector Documentation -**Overview**: The ZenML Azure Service Connector enables authentication and access to Azure resources such as Blob storage, AKS clusters, and ACR registries. It supports automatic credential configuration via the Azure CLI and specialized authentication for various Azure services. +#### Overview +The ZenML Azure Service Connector enables authentication and access to Azure resources like Blob storage, AKS clusters, and ACR registries. It supports automatic configuration and credential detection via the Azure CLI. -#### Key Features: +#### Key Features - **Resource Types**: - **Generic Azure Resource**: Connects to any Azure service using generic credentials. - - **Azure Blob Storage**: Requires specific IAM permissions (e.g., `Storage Blob Data Contributor`). Supports URIs like `{az|abfs}://{container-name}`. - - **AKS Kubernetes Cluster**: Requires permissions to list AKS clusters. Identified by `[{resource-group}/]{cluster-name}`. - - **ACR Container Registry**: Requires permissions to pull/push images. Identified by `[https://]{registry-name}.azurecr.io`. + - **Azure Blob Storage**: Requires IAM permissions for reading/writing blobs and listing accounts/containers. + - URI formats: `{az|abfs}://{container-name}` or `{container-name}`. + - Only supports service principal authentication. + - **AKS Kubernetes Cluster**: Requires permissions to list AKS clusters. + - URI formats: `[{resource-group}/]{cluster-name}` or `{cluster-name}`. + - **ACR Container Registry**: Requires permissions to pull/push images and list registries. + - URI formats: `[https://]{registry-name}.azurecr.io` or `{registry-name}`. -#### Authentication Methods: -1. **Implicit Authentication**: Uses environment variables or Azure CLI credentials. Disabled by default for security reasons. -2. **Service Principal**: Uses Azure client ID and secret for authentication. Recommended for production use. -3. **Access Token**: Temporary tokens that must be refreshed regularly. Not suitable for Azure Blob storage. +#### Authentication Methods +1. **Implicit Authentication**: Uses environment variables or Azure CLI credentials. Disabled by default due to security risks. +2. **Service Principal**: Requires a client ID and secret for authentication. Recommended for production use. +3. **Access Token**: Temporary tokens that require regular updates. Not suitable for Azure Blob storage. -#### Configuration Commands: +#### Configuration Commands - **List Connector Types**: ```shell zenml service-connector list-types --type azure ``` -- **Register Service Connector**: +- **Register Service Connector** (Service Principal): ```shell zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= ``` @@ -16066,45 +16129,30 @@ For more details, refer to the [Terraform documentation](https://www.terraform.i zenml service-connector describe ``` -#### Local Client Provisioning: -- Local Azure CLI, Kubernetes `kubectl`, and Docker CLI can be configured with credentials from the Azure Service Connector. 
-- Example to configure Kubernetes CLI: +#### Local Client Provisioning +- Configure local Docker and Kubernetes CLIs using credentials from the Azure Service Connector. +- Example for Kubernetes: ```shell - zenml service-connector login azure-service-principal --resource-type kubernetes-cluster --resource-id= + zenml service-connector login azure-service-principal --resource-type kubernetes-cluster --resource-id= ``` -#### Stack Components: -- Azure Service Connector can connect various Stack Components, including: - - **Azure Artifact Store**: Connects to Azure Blob storage. - - **Kubernetes Orchestrator**: Manages workloads on AKS. - - **Container Registry**: Connects to ACR for image management. +#### Stack Components Usage +- Connect Azure Blob Storage, AKS, and ACR to ZenML Stack Components without manual credential management. +- Example of connecting an Azure Blob Storage Artifact Store: + ```shell + zenml artifact-store register azure-demo --flavor azure --path=az://demo-zenmlartifactstore + ``` -#### End-to-End Example: +#### End-to-End Example 1. **Install Azure Integration**: ```shell zenml integration install -y azure ``` -2. **Register Service Connector**: - ```shell - zenml service-connector register azure-service-principal ... - ``` -3. **Register and Connect Stack Components**: - - **Artifact Store**: - ```shell - zenml artifact-store register azure-demo --flavor azure --path=az://demo-zenmlartifactstore - ``` - - **Orchestrator**: - ```shell - zenml orchestrator register aks-demo-cluster ... - ``` - - **Container Registry**: - ```shell - zenml container-registry register acr-demo-registry --uri=demozenmlcontainerregistry.azurecr.io - ``` -4. **Run a Pipeline**: - - Example pipeline code provided for execution. +2. **Register Service Connector** with service principal. +3. **Connect Azure Blob Storage** and **Kubernetes Orchestrator** to the registered connector. +4. **Run a Simple Pipeline** to validate the setup. -This summary encapsulates the essential technical details and commands for configuring and using the Azure Service Connector with ZenML, ensuring that critical information is retained while maintaining conciseness. +This documentation provides a comprehensive guide to configuring and using the Azure Service Connector with ZenML, detailing resource types, authentication methods, and practical commands for setup and usage. ================================================== @@ -16112,80 +16160,56 @@ This summary encapsulates the essential technical details and commands for confi # Service Connectors Guide Summary -This documentation provides a comprehensive guide for managing Service Connectors in ZenML, enabling connections to external resources. Key sections include: +This documentation provides a comprehensive guide on managing Service Connectors to connect ZenML with external resources. Key sections include: ## Overview -- **Terminology**: Understand essential terms related to Service Connectors, including types, resource types, and resource names. -- **Service Connector Types**: Different implementations exist, such as AWS, GCP, Azure, Kubernetes, and Docker connectors, each supporting various authentication methods and resource types. +- **Terminology**: Familiarize yourself with terms related to Service Connectors, including Service Connector Types, Resource Types, and Resource Names. +- **Service Connector Types**: Different implementations for connecting to various resources (e.g., AWS, GCP, Azure). 
Use commands like `zenml service-connector list-types` to explore available types. -### Key Commands -- **List Service Connector Types**: - ```sh - zenml service-connector list-types - ``` -- **Describe a Service Connector Type**: - ```sh - zenml service-connector describe-type - ``` +### Example Commands +```sh +zenml service-connector list-types +zenml service-connector describe-type aws +``` -### Resource Types -- Resource Types categorize resources based on access methods or vendors (e.g., `kubernetes-cluster`, `docker-registry`). -- Use the command to list types for specific resources: - ```sh - zenml service-connector list-types --resource-type - ``` +## Resource Types +- Organizes resources into classes based on access protocols or vendors (e.g., `kubernetes-cluster`, `docker-registry`). +- Use `zenml service-connector list-types --resource-type kubernetes-cluster` to find applicable Service Connector Types. -### Service Connector Registration -- **Register a Service Connector**: - ```sh - zenml service-connector register --type --auto-configure - ``` -- Supports multi-type (multiple resource types) and multi-instance (multiple resources of the same type) configurations. +## Service Connectors +- Configure ZenML to authenticate and connect to external resources, storing necessary credentials. +- Can be **multi-type** (access multiple resource types) or **single-instance** (access one resource). -### Verification -- Verify the configuration and credentials of Service Connectors: - ```sh - zenml service-connector verify - ``` +### Example Command for Multi-Type Connector +```sh +zenml service-connector register aws-multi-type --type aws --auto-configure +``` -### Connecting Stack Components -- Connect Stack Components to resources using registered Service Connectors: - ```sh - zenml artifact-store connect --connector - ``` +## Security Practices +- Best practices for authentication methods are outlined, emphasizing the importance of using temporary credentials for production environments. -### Auto-Configuration -- Automatically extract configuration from local environments using: - ```sh - zenml service-connector register --type --auto-configure - ``` +## Connecting Stack Components +- Connect Stack Components to external resources using registered Service Connectors. +- Use interactive CLI mode for ease of connection. -### Local Client Configuration -- Configure local CLI tools (e.g., Docker, Kubernetes) with credentials from Service Connectors: - ```sh - zenml service-connector login --resource-type --resource-id - ``` +### Example Command +```sh +zenml artifact-store connect -i +``` -### Resource Discovery -- Discover available resources through Service Connectors: - ```sh - zenml service-connector list-resources - ``` +## Verification and Discovery +- Verify Service Connector configurations and credentials using `zenml service-connector verify `. +- Discover available resources with `zenml service-connector list-resources`. -### Example Commands -- Register multi-type AWS Service Connector: - ```sh - zenml service-connector register aws-multi-type --type aws --auto-configure - ``` -- Verify a specific resource: - ```sh - zenml service-connector verify --resource-type --resource-id - ``` +## End-to-End Examples +- Detailed examples are provided for AWS, GCP, and Azure Service Connectors to illustrate the complete process from registration to pipeline execution. -### End-to-End Examples -For complete workflows, refer to specific Service Connector examples for AWS, GCP, and Azure. 
+### Example Command for Resource Discovery +```sh +zenml service-connector list-resources --resource-type s3-bucket +``` -This guide serves as a foundational resource for effectively utilizing Service Connectors within ZenML, ensuring secure and efficient connections to external resources. +This summary encapsulates the essential information and commands needed to effectively manage Service Connectors in ZenML, ensuring users can connect to and utilize external resources efficiently. ================================================== @@ -16193,20 +16217,20 @@ This guide serves as a foundational resource for effectively utilizing Service C ### Kubernetes Service Connector Overview -The ZenML Kubernetes Service Connector enables authentication and connection to Kubernetes clusters, allowing access to any generic cluster through pre-authenticated Kubernetes Python clients and local `kubectl` configuration. +The ZenML Kubernetes Service Connector enables authentication and connection to Kubernetes clusters, allowing access to generic clusters via pre-authenticated Kubernetes Python clients and local `kubectl` configuration. ### Prerequisites -- Install the Kubernetes Service Connector: - - For standalone installation: +- Install the connector: + - For only the Kubernetes Service Connector: ```shell pip install "zenml[connectors-kubernetes]" ``` - - For full integration: + - For the entire Kubernetes ZenML integration: ```shell zenml integration install kubernetes ``` -- Local `kubectl` configuration is not mandatory for accessing clusters. +- Local `kubectl` configuration is not required for accessing Kubernetes clusters. ### Resource Types @@ -16215,13 +16239,13 @@ The ZenML Kubernetes Service Connector enables authentication and connection to ### Authentication Methods 1. Username and password (not recommended for production). -2. Authentication token (can be used with empty token for local K3D clusters). +2. Authentication token (with or without client certificates). For local K3D clusters, an empty token can be used. -**Warning:** Credentials are directly distributed to clients; use API tokens with client certificates when possible. +**Warning:** The Service Connector does not generate short-lived credentials; configured credentials are directly used for authentication. Use API tokens with client certificates when possible. ### Auto-configuration -Fetch credentials from the local `kubectl` during registration: +Fetch credentials from the local `kubectl` configuration during registration. 
Example command to register a service connector with auto-configuration: ```sh zenml service-connector register kube-auto --type kubernetes --auto-configure @@ -16229,7 +16253,7 @@ zenml service-connector register kube-auto --type kubernetes --auto-configure **Example Output:** ``` -Successfully registered service connector `kube-auto` with access to the following resources: +Successfully registered service connector `kube-auto` with access to: ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ ┃ RESOURCE TYPE │ RESOURCE NAMES ┃ ┠───────────────────────┼────────────────┨ @@ -16239,43 +16263,42 @@ Successfully registered service connector `kube-auto` with access to the followi ### Describe Service Connector -To view details of the registered service connector: +To view details of a service connector: ```sh -zenml service-connector describe kube-auto +zenml service-connector describe kube-auto ``` **Example Output:** ``` -Service connector 'kube-auto' of type 'kubernetes' with id '4315e8eb-fcbd-4938-a4d7-a9218ab372a1' is owned by user 'default' and is 'private'. +Service connector 'kube-auto' of type 'kubernetes' with ID '4315e8eb...' is owned by user 'default' and is 'private'. ┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ PROPERTY │ VALUE ┃ ┠──────────────────┼──────────────────────────────────────┨ -┃ ID │ 4315e8eb-fcbd-4938-a4d7-a9218ab372a1 ┃ +┃ ID │ 4315e8eb... ┃ ┃ NAME │ kube-auto ┃ ┃ AUTH METHOD │ token ┃ -┃ RESOURCE TYPES │ 🌀 kubernetes-cluster ┃ ┃ RESOURCE NAME │ 35.175.95.223 ┃ -┃ OWNER │ default ┃ +┃ CREATED_AT │ 2023-05-16 21:45:33.224740 ┃ ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ ``` ### Local Client Provisioning -Configure the local Kubernetes client using: +To configure the local Kubernetes client with credentials: ```sh -zenml service-connector login kube-auto +zenml service-connector login kube-auto ``` **Example Output:** ``` -Updated local kubeconfig with the cluster details. The current kubectl context was set to '35.185.95.223'. +Updated local kubeconfig with the cluster details. Current context set to '35.185.95.223'. ``` ### Stack Components Usage -The Kubernetes Service Connector can be utilized in Orchestrator and Model Deployer stack components, managing Kubernetes workloads without needing explicit `kubectl` configurations in the target environment. +The Kubernetes Service Connector can be utilized in Orchestrator and Model Deployer stack components, managing Kubernetes workloads without needing explicit `kubectl` configurations in the environment. ================================================== @@ -16283,36 +16306,37 @@ The Kubernetes Service Connector can be utilized in Orchestrator and Model Deplo ### HyperAI Service Connector Overview -The ZenML HyperAI Service Connector facilitates authentication with HyperAI instances for deploying pipeline runs. It provides pre-authenticated Paramiko SSH clients to linked Stack Components. +The ZenML HyperAI Service Connector enables authentication with HyperAI instances for deploying pipeline runs. It provides pre-authenticated Paramiko SSH clients to linked Stack Components. 
#### Command to List Connector Types ```shell $ zenml service-connector list-types --type hyperai ``` -#### Supported Resource Types and Authentication Methods -The connector supports HyperAI instances and offers the following authentication methods: -- RSA key -- DSA (DSS) key -- ECDSA key -- ED25519 key +#### Supported Resource Types +- **HyperAI instances** + +#### Authentication Methods +ZenML establishes an SSH connection to HyperAI instances, supporting the following authentication methods: +1. RSA key +2. DSA key +3. ECDSA key +4. ED25519 key -**Warning:** SSH private keys used in the connector will be shared across all clients running pipelines, granting unrestricted access to HyperAI instances. +**Important Note:** SSH private keys are distributed to all clients running pipelines, granting unrestricted access to HyperAI instances. #### Configuration Requirements -To configure the Service Connector, provide: -- At least one `hostname` -- `username` for login -- Optionally, an `ssh_passphrase` +- At least one `hostname` and a `username` are required for configuration. +- An optional `ssh_passphrase` can be provided. -You can either: -1. Create separate connectors for each HyperAI instance with different SSH keys. -2. Use a single SSH key for multiple instances and select the instance when creating the HyperAI orchestrator component. +**Usage Options:** +1. Create separate service connectors for each HyperAI instance with different SSH keys. +2. Use a single SSH key for multiple instances, selecting the instance during HyperAI orchestrator component creation. #### Auto-Configuration -The Service Connector does not support auto-discovery of authentication credentials from HyperAI instances. Feedback on this feature is welcome via [Slack](https://zenml.io/slack) or GitHub. +This Service Connector does not support auto-discovery of authentication credentials. Feedback for this feature can be provided via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues). -#### Stack Components Usage +#### Stack Component Usage The HyperAI Service Connector is utilized by the HyperAI Orchestrator to deploy pipeline runs to HyperAI instances. ================================================== @@ -16321,9 +16345,9 @@ The HyperAI Service Connector is utilized by the HyperAI Orchestrator to deploy ### Docker Service Connector Overview -The ZenML Docker Service Connector enables authentication with Docker or OCI container registries and manages Docker clients for these registries. It provides pre-authenticated `python-docker` clients to linked Stack Components. +The ZenML Docker Service Connector enables authentication with Docker or OCI container registries and manages Docker clients. It provides pre-authenticated `python-docker` clients for linked Stack Components. -#### Command to List Docker Service Connector Types +#### Command to List Connector Types ```shell zenml service-connector list-types --type docker ``` @@ -16332,27 +16356,21 @@ zenml service-connector list-types --type docker - **Resource Type**: `docker-registry` - **Registry Formats**: - DockerHub: `docker.io` or `https://index.docker.io/v1/` - - Generic OCI: `https://host:port/` + - OCI registry: `https://host:port/` #### Authentication Methods -Authentication can be done using a username and password or access token, with a preference for API tokens over passwords. +Authentication is performed using a username and password or access token. API tokens are recommended over passwords. 
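Following that recommendation, a non-interactive registration could pass a personal access token in place of the account password. This is only a sketch: the flag names below are assumptions derived from the `username`, `password`, and `registry` fields collected by the interactive prompt shown next, which is the documented path:

```sh
# Hypothetical non-interactive variant; an access token is supplied
# where the interactive prompt would ask for a password.
zenml service-connector register dockerhub --type docker --auth-method password \
    --username=<DOCKERHUB_USERNAME> --password=<DOCKERHUB_ACCESS_TOKEN>
```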
#### Registering a Docker Service Connector ```sh zenml service-connector register dockerhub --type docker -in ``` -**Example Command Output:** +**Example Command Output**: ```text Please enter a name for the service connector [dockerhub]: -Please enter a description for the service connector []: -Please select a service connector type (docker) [docker]: -Only one resource type is available for this connector (docker-registry). -Only one authentication method is available for this connector (password). Would you like to use it? [Y/n]: -[username] Username {string, secret, required}: -[password] Password {string, secret, required}: -[registry] Registry server URL. Omit to use DockerHub. {string, optional}: -Successfully registered service connector `dockerhub` with access to the following resources: +... +Successfully registered service connector `dockerhub` with access to: ┏━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ ┃ RESOURCE TYPE │ RESOURCE NAMES ┃ ┠────────────────────┼────────────────┨ @@ -16360,10 +16378,10 @@ Successfully registered service connector `dockerhub` with access to the followi ┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ ``` -**Note**: Credentials are directly used for authentication and are not short-lived. +**Note**: Credentials configured in the Service Connector are distributed directly to clients and not short-lived. -#### Auto-Configuration -The connector does not support auto-discovery of credentials from local Docker clients. Feedback can be provided via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues). +#### Auto-configuration +The Service Connector does not support auto-discovery of authentication credentials from local Docker clients. Feedback can be provided via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues). #### Local Client Provisioning To configure the local Docker client with credentials: @@ -16371,18 +16389,17 @@ To configure the local Docker client with credentials: zenml service-connector login dockerhub ``` -**Example Command Output:** +**Example Command Output**: ```text -Attempting to configure local client using service connector 'dockerhub'... -WARNING! Your password will be stored unencrypted in /home/user/.docker/config.json. -Configure a credential helper to remove this warning. -The 'dockerhub' Docker Service Connector was used to successfully configure the local Docker client. +WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json. +... +The 'dockerhub' Docker Service Connector was used to configure the local client. ``` #### Stack Components Usage -The Docker Service Connector can be utilized by all Container Registry stack components to authenticate with remote registries, eliminating the need for explicit Docker credentials in the environment. +The Docker Service Connector allows all Container Registry stack components to authenticate with remote Docker/OCI registries, enabling image building and publishing without explicit Docker credentials in the environment. -**Warning**: ZenML currently does not support automatic Docker credential configuration in container runtimes like Kubernetes. This feature is planned for future releases. +**Warning**: ZenML does not currently support automatic Docker credential configuration in container runtimes like Kubernetes. This feature will be added in a future release. 
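As a usage sketch (the component name and registry URI below are illustrative, and `dockerhub` is assumed to be the DockerHub container registry flavor), a container registry stack component could be linked to the connector registered above using the same register/connect pattern shown for other connectors in this guide:

```sh
# Illustrative names; replace <account-name> with your DockerHub account.
zenml container-registry register dockerhub-registry --flavor=dockerhub --uri=docker.io/<account-name>
zenml container-registry connect dockerhub-registry --connector dockerhub
```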
================================================== @@ -16390,329 +16407,293 @@ The Docker Service Connector can be utilized by all Container Registry stack com ### Summary of GCP Service Connector Documentation -#### Overview -The ZenML GCP Service Connector enables seamless authentication and access to various GCP resources like GCS buckets, GKE clusters, and GCR registries. It supports multiple authentication methods, including GCP user accounts, service accounts, OAuth 2.0 tokens, and implicit authentication. By default, it issues short-lived OAuth 2.0 tokens for enhanced security. +The **GCP Service Connector** in ZenML enables seamless authentication and access to various Google Cloud Platform (GCP) resources, including GCS buckets, GKE clusters, and GCR registries. It supports multiple authentication methods, such as user accounts, service accounts, OAuth 2.0 tokens, and implicit authentication, prioritizing security by issuing short-lived OAuth 2.0 tokens by default. -#### Key Features -- **Authentication Methods**: Supports GCP user accounts, service accounts, OAuth 2.0 tokens, and implicit authentication. -- **Resource Types**: +#### Key Features: +- **Resource Types**: - **Generic GCP Resource**: Connects to any GCP service using OAuth 2.0 tokens. - **GCS Bucket**: Requires specific permissions (e.g., `storage.buckets.list`, `storage.objects.create`). - **GKE Cluster**: Requires permissions like `container.clusters.list`. - - **GAR/GCR Registry**: Supports both Google Artifact Registry and legacy Google Container Registry with specific permissions. + - **GAR/GCR Registry**: Supports both Artifact Registry and legacy GCR with specific permissions. -#### Prerequisites -- Install the GCP Service Connector via: - - `pip install "zenml[connectors-gcp]"` for standalone installation. - - `zenml integration install gcp` for full integration. -- GCP CLI installation is recommended for auto-configuration. +- **Authentication Methods**: + - **Implicit Authentication**: Automatically discovers credentials but is disabled by default due to security risks. + - **GCP User Account**: Generates temporary tokens from user credentials. + - **GCP Service Account**: Similar to user accounts but uses service account keys. + - **Service Account Impersonation**: Generates temporary tokens by impersonating another service account. + - **External Account (Workload Identity)**: Uses credentials from AWS IAM or Azure AD. + - **OAuth 2.0 Token**: Requires manual token management. -#### Command Examples -- **List Connector Types**: - ```shell +#### Prerequisites: +- Install the GCP integration via: + ```bash + pip install "zenml[connectors-gcp]" + ``` + or + ```bash + zenml integration install gcp + ``` + +#### Commands: +- **List Service Connector Types**: + ```bash zenml service-connector list-types --type gcp ``` - **Register Service Connector**: - ```shell - zenml service-connector register gcp-implicit --type gcp --auth-method implicit --auto-configure + ```bash + zenml service-connector register --type gcp --auth-method --auto-configure ``` - **Describe Service Connector**: - ```shell - zenml service-connector describe gcp-implicit + ```bash + zenml service-connector describe ``` -#### Authentication Methods -1. **Implicit Authentication**: Automatically discovers credentials from environment variables or GCP CLI. -2. **User Account**: Uses long-lived credentials, generating temporary OAuth tokens. -3. **Service Account**: Similar to user accounts but uses service account keys. -4. 
**Impersonation**: Generates temporary credentials by impersonating another service account. -5. **External Account**: Uses workload identity federation to authenticate with external providers. -6. **OAuth 2.0 Token**: Requires manual token management. - -#### Local Client Provisioning -The connector can configure local clients like `gcloud`, `kubectl`, and Docker CLI with short-lived credentials. This allows for secure access without storing sensitive information. +- **Verify Access to Resources**: + ```bash + zenml service-connector verify --resource-type + ``` -#### Stack Components Integration -The GCP Service Connector can be linked to various ZenML stack components, including: -- GCS Artifact Store -- Kubernetes Orchestrator -- Google Artifact Registry -- GCP Image Builder +- **Login for Local Client Configuration**: + ```bash + zenml service-connector login --resource-type --resource-id + ``` -#### End-to-End Examples -1. **Multi-Type GCP Service Connector**: Connects multiple resources for a unified stack. -2. **Single-Instance Connectors**: Each stack component uses its dedicated service connector. +#### Example End-to-End Workflow: +1. Configure local GCP CLI and install ZenML integration. +2. Register a multi-type GCP Service Connector. +3. Connect various Stack Components (e.g., GCS Artifact Store, GKE Orchestrator) to the registered connector. +4. Run a simple pipeline to validate the setup. -#### Conclusion -The GCP Service Connector provides a robust framework for integrating ZenML with GCP services, ensuring secure and efficient access to cloud resources while supporting various authentication methods and resource types. +This documentation provides comprehensive guidance on integrating ZenML with GCP services, ensuring secure and efficient access to cloud resources. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/best-security-practices.md === -### Summary of Best Practices for Service Connector Authentication Methods +### Summary of Best Practices for Authentication Methods in Service Connectors -This documentation outlines best practices for various authentication methods used by Service Connectors, particularly for cloud providers. It provides guidelines for selecting appropriate authentication methods based on security considerations. +Service Connectors provide various authentication methods, particularly for cloud providers. While no unified authentication standard exists, identifiable patterns can guide the selection of authentication methods. #### Key Authentication Methods 1. **Username and Password** - - Avoid using primary account passwords for authentication. Use alternative credentials like session tokens or API keys whenever possible. - - Passwords are the least secure method and should not be shared or used for automated workloads. + - Avoid using primary account passwords for authentication. + - Use alternative credentials like session tokens or API keys when possible. + - Passwords should not be shared or used for automated workloads. 2. **Implicit Authentication** - - Provides immediate access to cloud resources using locally stored credentials (e.g., environment variables, configuration files). - - Disabled by default due to security risks; must be explicitly enabled. - - Not recommended for portability or reproducibility. + - Provides immediate access to cloud resources without configuration. 
+ - Carries security risks, as it may grant access to resources configured for the ZenML Server. + - Disabled by default; can be enabled via environment variables or Helm chart settings. + - Utilizes locally stored credentials and environment variables. 3. **Long-lived Credentials (API Keys, Account Keys)** - - Ideal for production use, especially when combined with mechanisms for generating short-lived tokens or impersonating accounts. - - Long-lived credentials are exchanged for temporary tokens to enhance security. - - Different cloud providers have specific implementations (e.g., AWS Access Keys, GCP Service Account Credentials). + - Preferred for production use, especially when sharing results. + - Cloud platforms do not use passwords directly; instead, they exchange them for long-lived credentials. + - Different cloud providers have their own terminology for these credentials (e.g., AWS Access Keys, GCP Service Account Credentials). 4. **Generating Temporary and Down-scoped Credentials** - - Temporary credentials limit exposure by issuing tokens with a defined lifetime. - - Down-scoped credentials restrict permissions to only what is necessary for specific tasks. + - Temporary credentials are issued from long-lived credentials, enhancing security by limiting exposure. + - Down-scoped credentials restrict permissions to the minimum required for specific resources. 5. **Impersonating Accounts and Assuming Roles** - - Requires setup of multiple accounts/roles but offers flexibility and control. - - Long-lived credentials are used to obtain short-lived tokens with limited permissions. + - Involves configuring Service Connectors with long-lived credentials tied to a primary account. + - Requires provisioning secondary accounts or roles with necessary permissions. + - The Service Connector exchanges long-lived credentials for short-lived tokens with restricted permissions. 6. **Short-lived Credentials** - - Temporary credentials can be manually generated or automatically configured. + - Temporary credentials that expire, making them impractical for long-term use. - Useful for granting temporary access without exposing long-lived credentials. -#### Examples +### Example Commands -- **GCP Implicit Authentication Example:** +- **Registering a GCP Implicit Authentication Service Connector:** ```sh zenml service-connector register gcp-implicit --type gcp --auth-method implicit --project_id=zenml-core ``` -- **AWS Temporary Credentials Example:** +- **Registering an AWS Service Connector with Federation Token:** ```sh - AWS_PROFILE=connectors zenml service-connector register aws-sts-token --type aws --auto-configure --auth-method sts-token + zenml service-connector register aws-federation-multi --type aws --auth-method=federation-token --auto-configure ``` -#### Best Practices -- Use service credentials over user credentials to enforce the least-privilege principle. -- Avoid sharing long-lived credentials; use temporary credentials or impersonation for access. -- Be cautious with implicit authentication due to its security implications. 
+- **Registering a GCP Service Connector with Account Impersonation:** + ```sh + zenml service-connector register gcp-impersonate-sa --type gcp --auth-method impersonation --service_account_json=@empty-connectors@zenml-core.json --project_id=zenml-core --target_principal=zenml-bucket-sl@zenml-core.iam.gserviceaccount.com --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl + ``` -This summary provides a concise overview of the authentication methods and best practices for configuring Service Connectors securely. +### Conclusion +Choosing the right authentication method for Service Connectors is crucial for security and usability. Long-lived credentials, temporary tokens, and impersonation strategies provide flexibility while minimizing risks. Always consider the implications of each method on portability and security. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/aws-service-connector.md === -### AWS Service Connector Overview +### Summary of AWS Service Connector Documentation -The **ZenML AWS Service Connector** enables authentication and access to various AWS resources, such as S3 buckets, EKS clusters, and ECR registries. It supports multiple authentication methods, including long-lived AWS secret keys, IAM roles, STS tokens, and implicit authentication. The connector can generate temporary STS tokens with minimal permissions for security. +**Overview**: The AWS Service Connector in ZenML enables secure connections to AWS resources such as S3 buckets, EKS clusters, and ECR registries. It supports various authentication methods, including AWS secret keys, IAM roles, STS tokens, and implicit authentication. #### Key Features: -- **Resource Types**: Supports generic AWS resources, S3 buckets, EKS clusters, and ECR registries. -- **Authentication Methods**: - - **Implicit Authentication**: Uses environment variables or local AWS CLI configurations. - - **AWS Secret Key**: Long-lived credentials, not recommended for production. - - **AWS STS Token**: Temporary tokens, requires regular updates. - - **AWS IAM Role**: Generates temporary STS credentials by assuming a role. - - **AWS Session Token**: Generates temporary session tokens for IAM users. - - **AWS Federation Token**: Generates temporary tokens for federated users. +- **Authentication Methods**: + - **Implicit Authentication**: Uses environment variables or IAM roles; disabled by default for security. + - **AWS Secret Key**: Long-lived credentials; not recommended for production. + - **AWS STS Token**: Temporary tokens; requires manual renewal. + - **AWS IAM Role**: Assumes roles to generate temporary credentials. + - **AWS Session Token**: Generates temporary tokens for IAM users. + - **AWS Federation Token**: Generates tokens for federated users. -#### Prerequisites: -- Install ZenML with AWS support: - ```bash +- **Resource Types**: + - **Generic AWS Resource**: Connects to any AWS service using a boto3 session. + - **S3 Bucket**: Requires specific IAM permissions for S3 operations. + - **EKS Cluster**: Requires permissions to list and describe clusters. + - **ECR Registry**: Requires permissions for ECR operations. + +#### Configuration and Usage: +- **Prerequisites**: Install ZenML with AWS integration using: + ```shell pip install "zenml[connectors-aws]" ``` -- Optionally, install the AWS CLI for auto-configuration: - ```bash - zenml integration install aws - ``` - -### Resource Types and Permissions - -1. 
**Generic AWS Resource**: Connects to any AWS service using a pre-configured boto3 session. -2. **S3 Bucket**: Requires specific IAM permissions like `s3:ListBucket`, `s3:GetObject`, etc. -3. **EKS Cluster**: Requires permissions like `eks:ListClusters` and `eks:DescribeCluster`. -4. **ECR Registry**: Requires permissions such as `ecr:DescribeRepositories` and `ecr:PutImage`. - -### Authentication Methods - -- **Implicit Authentication**: Automatically discovers credentials from environment variables or AWS CLI configurations. Requires enabling through environment variable `ZENML_ENABLE_IMPLICIT_AUTH_METHODS`. - -- **AWS Secret Key**: Uses long-lived credentials, suitable for development but not recommended for production. - -- **AWS STS Token**: Requires manual token generation and updates. - -- **AWS IAM Role**: Assumes a role to generate temporary STS tokens, recommended for production. - -- **AWS Session Token**: Generates temporary session tokens for IAM users. - -- **AWS Federation Token**: Generates tokens for federated users, requires specific IAM permissions. - -### Auto-Configuration - -The connector can auto-discover credentials from the AWS CLI during registration: -```bash -AWS_PROFILE=connectors zenml service-connector register aws-auto --type aws --auto-configure -``` - -### Local Client Provisioning - -Local AWS CLI, Kubernetes `kubectl`, and Docker CLI can be configured with credentials from the AWS Service Connector. The credentials have a short lifetime and need regular refreshes. - -### Example Commands - -- **List AWS Service Connector Types**: - ```bash - zenml service-connector list-types --type aws + or + ```shell + zenml integration install aws ``` -- **Register a Service Connector**: - ```bash - AWS_PROFILE=connectors zenml service-connector register aws-implicit --type aws --auth-method implicit --region=us-east-1 - ``` +- **Registering a Service Connector**: + - List available AWS service connector types: + ```shell + zenml service-connector list-types --type aws + ``` + - Register a connector with auto-configuration: + ```shell + AWS_PROFILE=connectors zenml service-connector register aws-implicit --type aws --auth-method implicit --region=us-east-1 + ``` -- **Verify Access to Resources**: - ```bash +- **Local Client Configuration**: + - Configure local AWS CLI, Kubernetes, and Docker clients with credentials from the AWS Service Connector. + +#### Example Commands: +- **Verify Access**: + ```shell zenml service-connector verify aws-implicit --resource-type s3-bucket ``` +- **Register Stack Components**: + ```shell + zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles + zenml orchestrator register eks-zenml-zenhacks --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads + zenml container-registry register ecr-us-east-1 --flavor aws --uri=715803424590.dkr.ecr.us-east-1.amazonaws.com + ``` -### End-to-End Example - -1. **Install AWS Integration**: - ```bash - zenml integration install -y aws s3 - ``` +- **Run a Simple Pipeline**: + ```python + from zenml import pipeline, step -2. **Register Multi-Type AWS Service Connector**: - ```bash - AWS_PROFILE=connectors zenml service-connector register aws-demo-multi --type aws --auto-configure - ``` + @step + def step_1() -> str: + return "world" -3. 
**Register and Connect Stack Components**: - - **S3 Artifact Store**: - ```bash - zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles - zenml artifact-store connect s3-zenfiles --connector aws-demo-multi - ``` + @step(enable_cache=False) + def step_2(input_one: str, input_two: str) -> None: + print(f"{input_one} {input_two}") - - **Kubernetes Orchestrator**: - ```bash - zenml orchestrator register eks-zenml-zenhacks --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads - zenml orchestrator connect eks-zenml-zenhacks --connector aws-demo-multi - ``` + @pipeline + def my_pipeline(): + output_step_one = step_1() + step_2(input_one="hello", input_two=output_step_one) - - **ECR Container Registry**: - ```bash - zenml container-registry register ecr-us-east-1 --flavor aws --uri=715803424590.dkr.ecr.us-east-1.amazonaws.com - zenml container-registry connect ecr-us-east-1 --connector aws-demo-multi - ``` + if __name__ == "__main__": + my_pipeline() + ``` -4. **Run a Simple Pipeline**: - ```python - from zenml import pipeline, step +### Important Notes: +- **MFA Restrictions**: The connector does not work with AWS roles that have Multi-Factor Authentication (MFA) enabled. +- **Security Recommendations**: Use IAM roles or temporary tokens for production environments to minimize security risks. - @step - def step_1() -> str: - return "world" +This summary encapsulates the essential information and commands for configuring and utilizing the AWS Service Connector with ZenML, ensuring that critical details are retained for effective use. - @step(enable_cache=False) - def step_2(input_one: str, input_two: str) -> None: - print(f"{input_one} {input_two}") +================================================== - @pipeline - def my_pipeline(): - output_step_one = step_1() - step_2(input_one="hello", input_two=output_step_one) +=== File: docs/book/how-to/infrastructure-deployment/auth-management/README.md === - if __name__ == "__main__": - my_pipeline() - ``` +### ZenML Service Connectors Overview -This overview provides a concise understanding of the ZenML AWS Service Connector, its configuration, resource types, authentication methods, and practical usage examples. +**Purpose**: ZenML Service Connectors enable seamless integration of ZenML deployments with various cloud providers and infrastructure services, simplifying authentication and authorization processes. -================================================== +#### Key Concepts -=== File: docs/book/how-to/infrastructure-deployment/auth-management/README.md === +- **MLOps Complexity**: MLOps platforms often require connections to multiple external services (e.g., AWS S3, Kubernetes). Managing credentials and permissions can be complex and error-prone. + +- **Service Connectors**: ZenML abstracts the complexity of authentication and authorization through Service Connectors, which act as intermediaries for secure access to resources. -### ZenML Service Connectors Overview +#### Use Case Example -**Purpose**: ZenML Service Connectors facilitate secure connections between ZenML deployments and various cloud providers (AWS, GCP, Azure, Kubernetes, etc.), streamlining the authentication and authorization processes for MLOps platforms. +**Connecting to AWS S3**: +1. **Registering a Service Connector**: + - Use the AWS Service Connector to link ZenML to an S3 bucket. 
+ - Command: + ```sh + zenml service-connector register aws-s3 --type aws --auto-configure --resource-type s3-bucket + ``` -#### Key Challenges -- **Authentication Complexity**: Configuring secure access to multiple services (e.g., AWS S3, Kubernetes) can be cumbersome and error-prone. -- **Security Risks**: Storing long-lived credentials directly in code or configuration can lead to security vulnerabilities. +2. **Connecting an Artifact Store**: + - Create and connect an S3 Artifact Store to the registered Service Connector: + ```sh + zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles + zenml artifact-store connect s3-zenfiles --connector aws-s3 + ``` -#### Solution: Service Connectors -- **Abstraction**: Service Connectors abstract away the complexity of authentication, allowing users to focus on pipeline development without worrying about security details. -- **Centralized Management**: Credentials are validated on the ZenML server, which issues short-lived tokens to clients, enhancing security. +#### Authentication Methods -#### Example Use Case: Connecting to AWS S3 -1. **List Available Service Connector Types**: - ```sh - zenml service-connector list-types - ``` +Service Connectors support various authentication methods, including: +- **AWS**: Implicit, secret-key, STS-token, IAM-role, session-token, federation-token. +- **Resource Types**: Includes S3 buckets, Kubernetes clusters, Docker registries, etc. -2. **Register an AWS Service Connector**: - ```sh - zenml service-connector register aws-s3 --type aws --auto-configure --resource-type s3-bucket - ``` +**Example Command to List Service Connector Types**: +```sh +zenml service-connector list-types +``` -3. **Connect an S3 Artifact Store**: - ```sh - zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles - zenml artifact-store connect s3-zenfiles --connector aws-s3 - ``` +**Example Command to Describe AWS Service Connector**: +```sh +zenml service-connector describe-type aws +``` -4. **Run a Simple Pipeline**: - ```python - from zenml import step, pipeline +#### Security Considerations - @step - def simple_step_one() -> str: - return "Hello World!" +- **Best Practices**: Avoid embedding credentials directly in Stack Components. Use Service Connectors to manage credentials securely. +- **Temporary Credentials**: Service Connectors can generate short-lived credentials, reducing security risks associated with long-lived credentials. - @step - def simple_step_two(msg: str) -> None: - print(msg) +#### Example Pipeline - @pipeline - def simple_pipeline() -> None: - message = simple_step_one() - simple_step_two(msg=message) +A simple pipeline can be defined and executed as follows: +```python +from zenml import step, pipeline - if __name__ == "__main__": - simple_pipeline() - ``` +@step +def simple_step_one() -> str: + return "Hello World!" -5. **Execute the Pipeline**: - ```sh - python run.py - ``` +@step +def simple_step_two(msg: str) -> None: + print(msg) -#### Authentication Methods Supported by AWS Service Connector -- **Implicit** -- **Secret Key** -- **STS Token** -- **IAM Role** -- **Session Token** -- **Federation Token** +@pipeline +def simple_pipeline() -> None: + message = simple_step_one() + simple_step_two(msg=message) -#### Security Best Practices -- Use short-lived tokens instead of long-lived credentials. -- Regularly rotate credentials and limit permissions. 
+if __name__ == "__main__": + simple_pipeline() +``` +Run the pipeline: +```sh +python run.py +``` -#### Additional Resources -- [Service Connector Guide](./service-connectors-guide.md) -- [Security Best Practices](./best-security-practices.md) -- [AWS Service Connector Documentation](./aws-service-connector.md) -- [GCP Service Connector Documentation](./gcp-service-connector.md) -- [Azure Service Connector Documentation](./azure-service-connector.md) +### Conclusion -ZenML's Service Connectors provide a robust framework for securely connecting to external services, simplifying the MLOps workflow while adhering to best security practices. +ZenML Service Connectors streamline the process of connecting to various cloud resources while implementing security best practices. They abstract the complexities of authentication, making it easier to manage MLOps workflows. For further details, refer to the complete guide on Service Connectors and security best practices. ================================================== @@ -16720,37 +16701,43 @@ ZenML's Service Connectors provide a robust framework for securely connecting to ### Summary: Referencing Secrets in Stack Configuration -**Overview**: Secret references allow secure configuration of stack components that require sensitive information (e.g., passwords, tokens) without hard-coding values. Use the syntax `{{.}}` to reference secrets. +In ZenML, sensitive information such as passwords or tokens can be securely referenced in stack components using secret references. Instead of hard-coding values, you specify the secret name and key with the syntax: `{{.}}`. -**Example Usage**: -1. **Creating a Secret**: - ```shell - zenml secret create mlflow_secret \ - --username=admin \ - --password=abc123 - ``` +#### Example: Registering and Using Secrets -2. **Referencing Secrets in a Component**: - ```shell - zenml experiment-tracker register mlflow \ - --flavor=mlflow \ - --tracking_username={{mlflow_secret.username}} \ - --tracking_password={{mlflow_secret.password}} \ - ... - ``` +To register a secret and use it in an experiment tracker: -**Validation**: ZenML validates the existence of referenced secrets and keys before running a pipeline to prevent runtime failures. The validation level can be controlled using the `ZENML_SECRET_VALIDATION_LEVEL` environment variable: -- `NONE`: Disables validation. -- `SECRET_EXISTS`: Validates only the existence of secrets. -- `SECRET_AND_KEY_EXISTS`: (default) Validates both secrets and key-value pairs. +```shell +# Create a secret named `mlflow_secret` with username and password +zenml secret create mlflow_secret --username=admin --password=abc123 + +# Reference the secret in the experiment tracker +zenml experiment-tracker register mlflow \ + --flavor=mlflow \ + --tracking_username={{mlflow_secret.username}} \ + --tracking_password={{mlflow_secret.password}} \ + ... +``` + +#### Secret Validation + +ZenML validates the existence of referenced secrets and keys before running a pipeline to prevent runtime failures. The validation level can be controlled using the `ZENML_SECRET_VALIDATION_LEVEL` environment variable: + +- **NONE**: Disables validation. +- **SECRET_EXISTS**: Validates only the existence of secrets. +- **SECRET_AND_KEY_EXISTS**: (Default) Validates both the secret and key-value pair existence. 
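For example, to relax validation to secret-level checks only (a sketch; the variable name and value come straight from the list above), set the environment variable before running the pipeline:

```shell
# Only verify that referenced secrets exist, not their individual keys
export ZENML_SECRET_VALIDATION_LEVEL=SECRET_EXISTS
```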
+ +#### Fetching Secret Values in Steps + +Secrets can be accessed within steps using the ZenML `Client` API: -**Fetching Secrets in Steps**: Access secrets directly within steps using the ZenML `Client` API: ```python from zenml import step from zenml.client import Client @step def secret_loader() -> None: + """Load the example secret from the server.""" secret = Client().get_secret() authenticate_to_some_api( username=secret.secret_values["username"], @@ -16758,22 +16745,29 @@ def secret_loader() -> None: ) ``` -**Further Reading**: For more on managing secrets, refer to the [Interact with secrets](../../../how-to/project-setup-and-management/interact-with-secrets.md) documentation. +### Additional Resources + +- For more on managing secrets, refer to the [Interact with secrets](../../../how-to/project-setup-and-management/interact-with-secrets.md) documentation. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/export-stack-requirements.md === -### Exporting Stack Requirements +### Summary of Export Stack Requirements Documentation To obtain the `pip` requirements for a specific stack, use the following CLI command: ```bash zenml stack export-requirements --output-file stack_requirements.txt +``` + +After exporting, install the requirements with: + +```bash pip install -r stack_requirements.txt ``` -This command exports the requirements to a file named `stack_requirements.txt`, which can then be used to install the necessary packages. +This process ensures that all necessary dependencies for the specified stack are captured and installed efficiently. ================================================== @@ -16782,54 +16776,53 @@ This command exports the requirements to a file named `stack_requirements.txt`, # Custom Stack Component Flavor in ZenML ## Overview -ZenML allows for the creation of custom stack component flavors, enhancing composability and reusability in MLOps platforms. This guide covers the essentials of defining and implementing custom flavors. +ZenML allows for the creation of custom stack component flavors to tailor MLOps solutions. This guide explains the concept of flavors and how to implement custom ones. ## Component Flavors -- **Component Type**: Broad category defining functionality (e.g., `artifact_store`). -- **Flavors**: Specific implementations of a component type (e.g., `local`, `s3`). +- **Component Type**: A broad category defining functionality (e.g., `artifact_store`). +- **Flavor**: A specific implementation of a component type (e.g., `local`, `s3`). ## Core Abstractions 1. **StackComponent**: Defines core functionality. Example: - ```python - from zenml.stack import StackComponent + ```python + from zenml.stack import StackComponent - class BaseArtifactStore(StackComponent): - @abstractmethod - def open(self, path, mode="r"): - pass + class BaseArtifactStore(StackComponent): + @abstractmethod + def open(self, path, mode="r"): + pass - @abstractmethod - def exists(self, path): - pass - ``` + @abstractmethod + def exists(self, path): + pass + ``` -2. **StackComponentConfig**: Configures stack component instances, separating static and dynamic configurations. - ```python - from zenml.stack import StackComponentConfig +2. **StackComponentConfig**: Configures a stack component instance, separating configuration from implementation. 
+ ```python + from zenml.stack import StackComponentConfig - class BaseArtifactStoreConfig(StackComponentConfig): - path: str - SUPPORTED_SCHEMES: ClassVar[Set[str]] - ``` + class BaseArtifactStoreConfig(StackComponentConfig): + path: str + SUPPORTED_SCHEMES: ClassVar[Set[str]] + ``` 3. **Flavor**: Combines `StackComponent` and `StackComponentConfig`, defining the flavor's name and type. - ```python - from zenml.stack import Flavor - from zenml.enums import StackComponentType + ```python + from zenml.stack import Flavor - class LocalArtifactStoreFlavor(Flavor): - @property - def name(self) -> str: - return "local" + class LocalArtifactStoreFlavor(Flavor): + @property + def name(self) -> str: + return "local" - @property - def type(self) -> StackComponentType: - return StackComponentType.ARTIFACT_STORE - ``` + @property + def type(self) -> StackComponentType: + return StackComponentType.ARTIFACT_STORE + ``` ## Implementing a Custom Flavor -### Configuration Class -Define the configuration class for your custom flavor: +### Step 1: Define Configuration Class +Define `SUPPORTED_SCHEMES` and additional configuration values. ```python from zenml.artifact_stores import BaseArtifactStoreConfig from zenml.utils.secret_utils import SecretField @@ -16837,26 +16830,20 @@ from zenml.utils.secret_utils import SecretField class MyS3ArtifactStoreConfig(BaseArtifactStoreConfig): SUPPORTED_SCHEMES: ClassVar[Set[str]] = {"s3://"} key: Optional[str] = SecretField(default=None) - secret: Optional[str] = SecretField(default=None) + # Additional fields... ``` -### Implementation Class -Implement the functionality using the S3 file system: +### Step 2: Implement the Artifact Store +Implement the abstract methods. ```python import s3fs from zenml.artifact_stores import BaseArtifactStore class MyS3ArtifactStore(BaseArtifactStore): - _filesystem: Optional[s3fs.S3FileSystem] = None - @property def filesystem(self) -> s3fs.S3FileSystem: - if not self._filesystem: - self._filesystem = s3fs.S3FileSystem( - key=self.config.key, - secret=self.config.secret, - ) - return self._filesystem + # Logic to initialize S3 filesystem + pass def open(self, path, mode="r"): return self.filesystem.open(path=path, mode=mode) @@ -16865,8 +16852,8 @@ class MyS3ArtifactStore(BaseArtifactStore): return self.filesystem.exists(path=path) ``` -### Flavor Class -Combine the configuration and implementation: +### Step 3: Define the Flavor +Combine the configuration and implementation classes. ```python from zenml.artifact_stores import BaseArtifactStoreFlavor @@ -16885,55 +16872,60 @@ class MyS3ArtifactStoreFlavor(BaseArtifactStoreFlavor): ``` ## Registering the Flavor -Register the new flavor using the ZenML CLI: +Use the ZenML CLI to register your flavor: ```shell zenml artifact-store flavor register flavors.my_flavor.MyS3ArtifactStoreFlavor ``` ## Usage -After registration, use the custom flavor in your stacks: +Register the artifact store and stack: ```shell -zenml artifact-store register \ - --flavor=my_s3_artifact_store \ - --path='some-path' - -zenml stack register \ - --artifact-store +zenml artifact-store register --flavor=my_s3_artifact_store --path='some-path' +zenml stack register --artifact-store ``` ## Best Practices - Execute `zenml init` consistently. -- Use the ZenML CLI to check required configuration values. -- Test thoroughly before production use. +- Use the CLI to check required configuration values. +- Test flavors thoroughly before production use. - Keep code clean and well-documented. 
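As a quick sanity check after registration (assuming the flavor was registered as shown above), listing the available artifact store flavors should now include the custom flavor alongside the built-in ones:

```shell
# The custom flavor should appear next to built-in flavors such as local and s3
zenml artifact-store flavor list
```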
## Further Learning -For specific stack component types, refer to the relevant documentation links for orchestrators, artifact stores, container registries, etc. +For specific stack component types, refer to the following: +- [Orchestrator](../../../component-guide/orchestrators/custom.md) +- [Artifact Store](../../../component-guide/artifact-stores/custom.md) +- [Container Registry](../../../component-guide/container-registries/custom.md) +- [Step Operator](../../../component-guide/step-operators/custom.md) +- [Model Deployer](../../../component-guide/model-deployers/custom.md) +- [Feature Store](../../../component-guide/feature-stores/custom.md) +- [Experiment Tracker](../../../component-guide/experiment-trackers/custom.md) +- [Alerter](../../../component-guide/alerters/custom.md) +- [Annotator](../../../component-guide/annotators/custom.md) +- [Data Validator](../../../component-guide/data-validators/custom.md) ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md === -### Summary: Deploying a Cloud Stack with Terraform +### Summary: Deploy a Cloud Stack with Terraform -ZenML provides a collection of [Terraform modules](https://registry.terraform.io/modules/zenml-io/zenml-stack) for provisioning cloud resources and integrating them with ZenML Stacks. These modules facilitate efficient and scalable deployment of machine learning infrastructure. +ZenML provides a collection of [Terraform modules](https://registry.terraform.io/modules/zenml-io/zenml-stack) to simplify the provisioning of cloud resources and integrate them with ZenML Stacks, enhancing machine learning infrastructure deployment. -#### Prerequisites +#### Pre-requisites - A deployed ZenML server instance accessible from the cloud provider. -- Create a service account and API key for programmatic access to the ZenML server using: +- Create a service account and API key for the ZenML server using: ```shell zenml service-account create ``` -- Install Terraform (version 1.9 or higher) on your machine. -- Authenticate with your cloud provider using their CLI or SDK. +- Install Terraform (version 1.9 or higher) and authenticate with your cloud provider's CLI. #### Using Terraform Modules -1. Set up the ZenML provider with your server URL and API key using environment variables: +1. Set up the ZenML Terraform provider with environment variables: ```shell export ZENML_SERVER_URL="https://your-zenml-server.com" export ZENML_API_KEY="" ``` -2. Create a Terraform configuration file (`main.tf`): +2. Create a Terraform configuration file (e.g., `main.tf`): ```hcl terraform { required_providers { @@ -16951,10 +16943,10 @@ ZenML provides a collection of [Terraform modules](https://registry.terraform.io } output "zenml_stack_id" { - value = module.zenml_stack.zenml_stack_id + value = module.zenml_stack.zenml_stack_id } output "zenml_stack_name" { - value = module.zenml_stack.zenml_stack_name + value = module.zenml_stack.zenml_stack_name } ``` 3. Run the following commands: @@ -16963,7 +16955,6 @@ ZenML provides a collection of [Terraform modules](https://registry.terraform.io terraform apply ``` 4. Confirm changes by typing `yes` when prompted. - 5. After provisioning, use: ```shell zenml integration install @@ -16971,31 +16962,17 @@ ZenML provides a collection of [Terraform modules](https://registry.terraform.io ``` #### Cloud Provider Specifics -- **AWS**: Requires AWS CLI and configuration. 
Example configuration: - ```hcl - provider "aws" { region = "eu-central-1" } - ``` - Key components include S3 Artifact Store, ECR Container Registry, and various orchestrators. - -- **GCP**: Requires `gcloud` CLI. Example configuration: - ```hcl - provider "google" { region = "europe-west3"; project = "my-project" } - ``` - Key components include GCS Artifact Store and Vertex AI Orchestrator. - -- **Azure**: Requires Azure CLI. Example configuration: - ```hcl - provider "azurerm" { features {} } - ``` - Key components include Azure Storage Account and AzureML Orchestrator. +- **AWS**: Requires AWS CLI configured with `aws configure`. Example configuration includes S3, ECR, and various orchestrators. +- **GCP**: Requires `gcloud` CLI configured with `gcloud init`. Configuration includes GCS, Google Artifact Registry, and orchestrators. +- **Azure**: Requires Azure CLI configured with `az login`. Configuration includes Azure Storage and Azure Container Registry. #### Cleanup -To remove all resources provisioned by Terraform, run: +To remove resources provisioned by Terraform, run: ```shell terraform destroy ``` -This documentation provides a comprehensive guide to deploying a cloud stack with ZenML using Terraform, covering prerequisites, configuration, and cleanup processes. +This command will delete all resources and unregister the ZenML stack from the ZenML server. ================================================== @@ -17003,234 +16980,219 @@ This documentation provides a comprehensive guide to deploying a cloud stack wit # Deploy a Cloud Stack with a Single Click -ZenML's **stack** concept represents your infrastructure configuration. Traditionally, creating a stack involves deploying infrastructure components and defining them in ZenML, which can be complex and time-consuming. To simplify this, ZenML offers a **1-click deployment feature** that allows you to deploy infrastructure on your chosen cloud provider quickly. +In ZenML, a **stack** represents your infrastructure configuration. Traditionally, creating a stack involves deploying infrastructure components and defining them in ZenML, which can be complex and time-consuming. ZenML now offers a **1-click deployment feature** to simplify this process, allowing you to deploy infrastructure on your chosen cloud provider effortlessly. ## Prerequisites -- A deployed instance of ZenML (not a local server). Set up instructions can be found [here](../../../getting-started/deploying-zenml/README.md). +- A deployed instance of ZenML (not local via `zenml login --local`). For setup instructions, refer to the [ZenML deployment guide](../../../getting-started/deploying-zenml/README.md). ## Using the 1-Click Deployment Tool -### Dashboard +### Dashboard Method 1. Navigate to the stacks page and click "+ New Stack". 2. Select "New Infrastructure". -3. Choose your cloud provider (AWS, GCP, Azure) and configure the stack. +3. Choose your cloud provider (AWS, GCP, or Azure) and configure the stack (region, name, etc.). +4. Follow the provider-specific instructions for deployment. #### AWS -- Select a region and stack name. -- Complete the configuration and click "Deploy in AWS" to be redirected to the AWS CloudFormation page. Log in, review, and create the stack. +- Configure the stack and click "Deploy in AWS". +- Log in to AWS, review the CloudFormation configuration, and create the stack. #### GCP -- Select a region and stack name. -- Click "Deploy in GCP" to start a Cloud Shell session. 
Trust the ZenML repository, authenticate, and configure your deployment using provided values. Run the deployment script to provision resources and register the stack. +- Configure the stack and click "Deploy in GCP". +- Log in to GCP Cloud Shell, review the scripts, authenticate, and run the deployment script. #### Azure -- Select a location and stack name. -- Click "Deploy in Azure" to start a Cloud Shell session. Paste the `main.tf` content, run `terraform init --upgrade` and `terraform apply` to deploy resources and register the stack. +- Configure the stack and click "Deploy in Azure". +- Use Azure Cloud Shell to paste the `main.tf` file, then run `terraform init --upgrade` and `terraform apply`. -### CLI +### CLI Method To create a remote stack via CLI, use: ```shell zenml stack deploy -p {aws|gcp|azure} ``` -### Provider-Specific Steps -- **AWS**: Follow prompts to deploy a CloudFormation stack. -- **GCP**: Follow prompts to deploy a Deployment Manager template. -- **Azure**: Use the ZenML Azure Stack Terraform module for deployment. - -## What Will Be Deployed? - +## Deployment Overview ### AWS Resources -- S3 bucket as ZenML Artifact Store. -- ECR container registry as ZenML Container Registry. -- CloudBuild project as ZenML Image Builder. -- IAM roles and permissions for SageMaker and other services. +- S3 bucket (Artifact Store) +- ECR container registry (Container Registry) +- CloudBuild project (Image Builder) +- IAM user/role with necessary permissions. ### GCP Resources -- GCS bucket as ZenML Artifact Store. -- GCP Artifact Registry as ZenML Container Registry. -- Permissions for Vertex AI and Cloud Build. +- GCS bucket (Artifact Store) +- GCP Artifact Registry (Container Registry) +- Vertex AI (Orchestrator and Step Operator) +- GCP Service Account with necessary permissions. ### Azure Resources -- Azure Resource Group containing: - - Storage Account and Blob Storage for Artifact Store. - - Container Registry for Container Registry. - - AzureML Workspace for Orchestrator and Step Operator. +- Azure Resource Group +- Azure Storage Account (Artifact Store) +- Azure Container Registry (Container Registry) +- AzureML Workspace (Orchestrator and Step Operator) - Azure Service Principal with necessary permissions. -With this feature, you can easily deploy a cloud stack and start running your pipelines in a remote environment. +## Conclusion +With the 1-click deployment feature, you can quickly set up a cloud stack and begin running your pipelines in a remote environment. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/README.md === -### Managing Stacks & Components +# Managing Stacks & Components in ZenML -#### What is a Stack? -A **stack** in ZenML represents the configuration of infrastructure and tooling for pipeline execution. It consists of various components, each serving a specific function, such as: -- **Container Registry** -- **Kubernetes Cluster** (as an orchestrator) -- **Artifact Store** -- **Experiment Tracker** (e.g., MLflow) +## What is a Stack? +A **stack** in ZenML represents the configuration of infrastructure and tooling for executing pipelines. It consists of various components, each handling specific tasks, such as: +- **Container Registry**: For managing container images. +- **Kubernetes Cluster**: As an orchestrator for deploying models. +- **Artifact Store**: For storing artifacts. +- **Experiment Tracker**: E.g., MLflow for tracking experiments. 
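As an illustrative sketch, a stack like this is assembled from individually registered components; every name below is hypothetical:

```shell
# Register the individual components (flavors must match your infrastructure)
zenml artifact-store register my_store -f s3 --path=s3://my-bucket
zenml orchestrator register my_orchestrator -f kubernetes

# Group the components into a stack and make it the active one
zenml stack register my_stack -a my_store -o my_orchestrator
zenml stack set my_stack
```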
-#### Organizing Execution Environments -ZenML allows running pipelines across multiple stacks, facilitating testing in different environments. For instance: -1. A data scientist experiments locally. -2. They transition to a staging cloud environment for advanced testing. -3. Finally, they deploy to a production-grade stack. +## Organizing Execution Environments +ZenML allows running pipelines across multiple stacks, facilitating testing in different environments: +- **Local Development**: Data scientists can experiment locally. +- **Staging**: Test advanced features in a cloud environment. +- **Production**: Deploy fully tested pipelines in a production-grade stack. -Benefits of separate stacks include: -- Preventing accidental production deployments. -- Reducing costs by using less powerful resources for staging. -- Controlling access by assigning permissions to specific stacks. +### Benefits of Separate Stacks +- Prevents accidental deployment of staging pipelines to production. +- Reduces costs by using less powerful resources in staging. +- Controls access by assigning permissions to specific stacks. -#### Managing Credentials -Stack components often require credentials for infrastructure interaction. ZenML recommends using **Service Connectors** to manage these credentials securely. +## Managing Credentials +Most stack components require credentials for infrastructure interaction. ZenML uses **Service Connectors** to manage these credentials securely, abstracting sensitive information from users. -**Recommended Roles:** -- Limit Service Connector creation to users with direct access to cloud resources to minimize credential leaks and simplify auditing. +### Recommended Roles +- Limit Service Connector creation to personnel with direct cloud resource access to minimize credential leaks and facilitate auditing. +- Create a connector for development/staging and another for production to ensure safe resource usage. -**Recommended Workflow:** -1. A few trusted individuals create Service Connectors. -2. Use a connector for development/staging. -3. Create a separate connector for production to avoid accidental resource usage. +### Recommended Workflow +1. Restrict Service Connector creation to a few trusted individuals. +2. Use a single connector for development/staging. +3. Create a separate connector for production when ready. -#### Deploying and Managing Stacks +## Deploying and Managing Stacks Deploying MLOps stacks can be complex due to: -- Specific tool requirements (e.g., Kubernetes for Kubeflow). -- Challenges in setting default infrastructure parameters. -- Potential issues with standard installations (e.g., custom service accounts for Vertex AI). -- Ensuring components have the right permissions to communicate. -- The need for resource cleanup post-experimentation. - -ZenML aims to simplify stack provisioning, configuration, and extension. - -#### Key Documentation Links +- Specific requirements for tools (e.g., Kubeflow needs a Kubernetes cluster). +- Difficulty in setting reasonable infrastructure parameters. +- Potential issues with standard tool installations (e.g., custom service accounts for Vertex AI). +- Need for proper permissions between components. +- Challenges in cleaning up resources post-experimentation. 
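As a hedged sketch of the credential workflow recommended above, a trusted engineer might register one connector per environment and attach components to it; the connector and component names are hypothetical, and `--auto-configure` assumes valid local cloud credentials:

```shell
# One connector for development/staging, a separate one for production
zenml service-connector register aws_staging --type aws --auto-configure
zenml service-connector register aws_production --type aws --auto-configure

# Stack components reference a connector instead of embedding raw credentials
zenml artifact-store connect my_store --connector aws_staging
```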
+ +### Documentation for Stack Management +ZenML provides guidance on provisioning, configuring, and extending stacks: - [Deploy a Cloud Stack](./deploy-a-cloud-stack.md) - [Register a Cloud Stack](./register-a-cloud-stack.md) - [Deploy with Terraform](./deploy-a-cloud-stack-with-terraform.md) - [Export Stack Requirements](./export-stack-requirements.md) - [Reference Secrets in Configuration](./reference-secrets-in-stack-configuration.md) -- [Implement a Custom Stack Component](./implement-a-custom-stack-component.md) +- [Implement Custom Stack Components](./implement-a-custom-stack-component.md) + +This documentation aims to simplify the process of managing complex stacks, making it easier to run ML pipelines effectively. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/register-a-cloud-stack.md === -### Summary of ZenML Cloud Stack Registration Documentation - -**Overview**: ZenML's stack represents the configuration of your infrastructure. Traditionally, creating a stack involves deploying infrastructure and defining stack components with authentication, which can be complex. The **stack wizard** simplifies this by allowing users to register a ZenML cloud stack using existing infrastructure. - -**Deployment Options**: -- **1-Click Deployment Tool**: For users without existing infrastructure. -- **Terraform Modules**: For users wanting more control over resource provisioning. - -### Using the Stack Wizard - -**Access**: Available via CLI and dashboard. - -#### Dashboard Steps: -1. Navigate to the stacks page and click "+ New Stack". -2. Select "Use existing Cloud" and choose your cloud provider. -3. Choose an authentication method and fill in the required fields. +### Summary of ZenML Stack Wizard Documentation -**Authentication Methods**: -- **AWS**: Options include AWS Secret Key, STS Token, IAM Role, Session Token, Federation Token. -- **GCP**: Options include User Account, Service Account, External Account, OAuth 2.0 Token, Service Account Impersonation. -- **Azure**: Options include Service Principal and Access Token. +**Overview:** +ZenML's stack represents the configuration of your infrastructure, typically requiring deployment and definition of stack components. The Stack Wizard simplifies the registration of a ZenML cloud stack using existing infrastructure, alleviating the challenges of remote setups. -After authentication, ZenML will display available resources to create stack components (artifact store, orchestrator, container registry). +**Deployment Options:** +- **1-Click Deployment Tool:** For users without existing infrastructure. +- **Terraform Modules:** For users wanting more control over resource provisioning. -#### CLI Command: -To register a remote stack: -```shell -zenml stack register -p {aws|gcp|azure} -``` -You can specify an existing service connector with `-sc `. +**Using the Stack Wizard:** +Available via CLI and Dashboard. -### Service Connector Configuration -1. The wizard checks for existing credentials in the local environment. -2. If found, you can use them or opt for manual configuration. +1. **Dashboard:** + - Access the Stack Wizard through the stacks page. + - Click "+ New Stack" and select "Use existing Cloud." + - Choose your cloud provider and authentication method. + - Complete required fields for authentication. -**Example Prompt**: -``` -AWS cloud service connector has detected connection credentials in your environment. 
-Would you like to use these credentials or create a new configuration by providing connection details? [y/n] (y): -``` +2. **CLI:** + - Command to register a remote stack: + ```shell + zenml stack register -p {aws|gcp|azure} + ``` + - Specify a service connector using `-sc `. -### Defining Cloud Components +**Authentication Methods:** +- **AWS:** + - Options include AWS Secret Key, STS Token, IAM Role, Session Token, Federation Token. +- **GCP:** + - Options include User Account, Service Account, External Account, OAuth 2.0 Token, Service Account Impersonation. +- **Azure:** + - Options include Service Principal and Access Token. + +**Defining Stack Components:** You will define three main components: - **Artifact Store** - **Orchestrator** - **Container Registry** -For each component, you can choose to reuse existing components or create new ones based on available resources. +For each component, you can: +- Reuse existing components. +- Create new components from available resources. -**Example Command Output for Orchestrator**: -``` -Available orchestrator -┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ -┃ Choice ┃ Name ┃ -┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ -│ [0] │ Create a new orchestrator │ -│ [1] │ existing_orchestrator_1 │ -│ [2] │ existing_orchestrator_2 │ -└──────────────────┴────────────────────────────────────────────────────┘ -``` +**Final Steps:** +After defining the components, ZenML will create and register the stack, enabling you to run pipelines in a remote setting. -### Conclusion -By following the wizard, users can register a cloud stack and start running pipelines in a remote setting efficiently. +This documentation provides a streamlined approach to registering a cloud stack in ZenML, ensuring users can efficiently leverage existing infrastructure. ================================================== === File: docs/book/how-to/control-logging/enable-or-disable-logs-storing.md === -# ZenML Logging Configuration +### ZenML Logging Overview -ZenML captures logs during step execution using its logging handler, which can capture both Python's logging module and print statements. These logs are stored in the artifact store associated with your stack and can be viewed on the dashboard. However, if not connected to a cloud artifact store with a service connector, logs won't be visible. +ZenML captures logs during step execution using a logging handler. Users can utilize the default Python logging module or print statements, which ZenML will log and store. -## Code Example for Logging +#### Example Code ```python import logging from zenml import step @step def my_step() -> None: - logging.warning("`Hello`") - print("World.") + logging.warning("`Hello`") # Using logging module + print("World.") # Using print statements ``` -## Disabling Log Storage +Logs are stored in the artifact store of your stack and can be viewed on the dashboard. However, logs will not be visible if not connected to a cloud artifact store with a service connector. For more details, refer to the [dashboard logging documentation](./view-logs-on-the-dasbhoard.md). -### 1. Using Decorators -You can disable log storage for specific steps or the entire pipeline using the `enable_step_logs` parameter: -```python -from zenml import pipeline, step +### Disabling Log Storage -@step(enable_step_logs=False) -def my_step() -> None: - ... +To disable log storage, you can: -@pipeline(enable_step_logs=False) -def my_pipeline(): - ... -``` +1. 
Use the `enable_step_logs` parameter in the `@step` or `@pipeline` decorator: + ```python + from zenml import pipeline, step -### 2. Using Environment Variable -Set the environment variable `ZENML_DISABLE_STEP_LOGS_STORAGE` to `true` in the execution environment. This variable takes precedence over the decorator settings: -```python -docker_settings = DockerSettings(environment={"ZENML_DISABLE_STEP_LOGS_STORAGE": "true"}) + @step(enable_step_logs=False) + def my_step() -> None: + ... -@pipeline(settings={"docker": docker_settings}) -def my_pipeline() -> None: - my_step() + @pipeline(enable_step_logs=False) + def my_pipeline(): + ... + ``` -my_pipeline = my_pipeline.with_options( - settings={"docker": docker_settings} -) -``` +2. Set the environmental variable `ZENML_DISABLE_STEP_LOGS_STORAGE` to `true`, which takes precedence over the above parameters. This variable must be set in the execution environment: + ```python + docker_settings = DockerSettings(environment={"ZENML_DISABLE_STEP_LOGS_STORAGE": "true"}) + + @pipeline(settings={"docker": docker_settings}) + def my_pipeline() -> None: + my_step() -For more details on viewing logs, refer to the documentation on [viewing logs on the dashboard](./view-logs-on-the-dasbhoard.md). + my_pipeline = my_pipeline.with_options( + settings={"docker": docker_settings} + ) + ``` + +This summary provides key information on logging in ZenML, including how to enable, disable, and view logs effectively. ================================================== @@ -17238,7 +17200,7 @@ For more details on viewing logs, refer to the documentation on [viewing logs on # Viewing Logs on the Dashboard -ZenML captures logs during step execution using a logging handler. Users can utilize the default Python logging module or print statements, which ZenML will catch and store. +ZenML captures logs during step execution using a logging handler. Users can utilize the default Python logging module or print statements, which ZenML will log and store. ```python import logging @@ -17246,18 +17208,20 @@ from zenml import step @step def my_step() -> None: - logging.warning("`Hello`") # Use the logging module. - print("World.") # Use print statements as well. + logging.warning("`Hello`") # Using the logging module. + print("World.") # Using print statements. ``` -Logs are stored in the artifact store of your stack and can be viewed on the dashboard only if the ZenML server has access to the artifact store. This access is true in two scenarios: +Logs are stored in the artifact store of your stack, accessible on the dashboard only if the ZenML server has direct access to it. This is true in two scenarios: 1. **Local ZenML Server**: Both local and remote artifact stores may be accessible based on client configuration. 2. **Deployed ZenML Server**: Logs from runs on a local artifact store are not accessible. Logs from a remote artifact store may be accessible if configured with a service connector. -Refer to the production guide for configuring a remote artifact store with a service connector. If configured correctly, logs will appear on the dashboard. +For configuration details, refer to the production guide on [remote artifact stores](../../user-guide/production-guide/remote-storage.md) and [service connectors](../../how-to/infrastructure-deployment/auth-management/service-connectors-guide.md). -**Note**: To disable log storage due to performance or storage limits, follow the provided instructions. +If logs are stored correctly, they will appear on the dashboard. 
+ +**Note**: To disable log storage for performance or storage reasons, follow [these instructions](./enable-or-disable-logs-storing.md). ================================================== @@ -17265,13 +17229,13 @@ Refer to the production guide for configuring a remote artifact store with a ser ### Disabling Rich Traceback Output in ZenML -ZenML uses the [`rich`](https://rich.readthedocs.io/en/stable/traceback.html) library for enhanced traceback output, which aids in debugging pipelines. To disable this feature, set the following environment variable: +ZenML uses the [`rich`](https://rich.readthedocs.io/en/stable/traceback.html) library for enhanced traceback output, which aids in debugging. To disable this feature, set the following environment variable: ```bash export ZENML_ENABLE_RICH_TRACEBACK=false ``` -This change will only affect local pipeline runs. To disable rich tracebacks for remote pipeline runs, set the `ZENML_ENABLE_RICH_TRACEBACK` variable in the pipeline's environment: +This change affects only local pipeline runs. To disable rich tracebacks for remote pipeline runs, set the environment variable in the pipeline's environment: ```python docker_settings = DockerSettings(environment={"ZENML_ENABLE_RICH_TRACEBACK": "false"}) @@ -17286,7 +17250,7 @@ my_pipeline = my_pipeline.with_options( ) ``` -This ensures that both local and remote runs will display plain text traceback output. +This ensures that both local and remote pipeline executions will display plain text traceback output. ================================================== @@ -17300,7 +17264,7 @@ ZenML uses colorful logging by default for better readability. To disable this f ZENML_LOGGING_COLORS_DISABLED=true ``` -Setting this variable in the client environment (e.g., your local machine) will also disable colorful logging for remote pipeline runs. To disable it locally while keeping it enabled for remote runs, set the variable in the pipeline run environment: +Setting this variable in the client environment (e.g., local machine) will disable colorful logging for remote pipeline runs as well. To disable it only locally while keeping it enabled for remote runs, set the variable in the pipeline run's environment: ```python docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "false"}) @@ -17315,21 +17279,21 @@ my_pipeline = my_pipeline.with_options( ) ``` -This allows for flexible logging configurations based on the execution environment. +This allows for flexible logging configurations based on the environment. ================================================== === File: docs/book/how-to/control-logging/set-logging-verbosity.md === -### Summary: Setting Logging Verbosity in ZenML +### Setting Logging Verbosity in ZenML -By default, ZenML sets the logging verbosity to `INFO`. To change this, set the environment variable `ZENML_LOGGING_VERBOSITY` to your desired level: +ZenML defaults to `INFO` logging verbosity. To change it, set the environment variable: ```bash export ZENML_LOGGING_VERBOSITY=INFO ``` -Available levels are `INFO`, `WARN`, `ERROR`, `CRITICAL`, and `DEBUG`. Note that changing this variable in the client environment (e.g., local machine) does not affect remote pipeline runs. To set logging verbosity for remote runs, configure it in the pipeline's environment: +Available levels: `INFO`, `WARN`, `ERROR`, `CRITICAL`, `DEBUG`. Note that changing this variable in the client environment (e.g., local machine) does not affect remote pipeline runs. 
To set logging verbosity for remote runs, configure it in the pipeline environment: ```python docker_settings = DockerSettings(environment={"ZENML_LOGGING_VERBOSITY": "DEBUG"}) @@ -17338,13 +17302,11 @@ docker_settings = DockerSettings(environment={"ZENML_LOGGING_VERBOSITY": "DEBUG" def my_pipeline() -> None: my_step() -# Alternatively, configure options +# Or configure options my_pipeline = my_pipeline.with_options( settings={"docker": docker_settings} ) -``` - -This ensures that the specified logging level is applied to both local and remote pipeline executions. +``` ================================================== @@ -17352,13 +17314,13 @@ This ensures that the specified logging level is applied to both local and remot ### Configuring ZenML's Default Logging Behavior -ZenML generates different types of logs across various components: +ZenML generates different types of logs across various environments: -1. **ZenML Server Logs**: Produced by the ZenML Server, similar to logs from any FastAPI server. -2. **Client or Runner Logs**: Generated in the client or runner environment, typically during pipeline execution, including logs before, after, and during the pipeline run. -3. **Execution Environment Logs**: Created at the orchestrator level when executing pipeline steps, usually implemented using Python's `logging` module. +1. **ZenML Server Logs**: Produced by the ZenML server, similar to any FastAPI server. +2. **Client or Runner Logs**: Generated during pipeline execution, capturing events before, during, and after the pipeline run. +3. **Execution Environment Logs**: Created at the orchestrator level during the execution of each pipeline step, typically using Python's `logging` module. -This documentation section focuses on how users can manage logging behavior across these environments. +This section outlines how users can manage logging behavior across these environments. ================================================== @@ -17366,21 +17328,18 @@ This documentation section focuses on how users can manage logging behavior acro # Data and Artifact Management in ZenML -This section outlines the management of data and artifacts within ZenML, focusing on key functionalities and practices. +This section outlines the management of data and artifacts within ZenML, focusing on key functionalities and processes. ## Key Concepts +- **Data Management**: Involves handling datasets throughout the machine learning lifecycle, ensuring data integrity and accessibility. +- **Artifact Management**: Refers to the storage and retrieval of artifacts generated during model training and evaluation, such as models, metrics, and visualizations. -- **Data Management**: Involves handling datasets used in machine learning workflows, including versioning, storage, and retrieval. -- **Artifact Management**: Refers to the management of outputs generated during the ML pipeline, such as models, metrics, and visualizations. - -## Important Features - -1. **Versioning**: ZenML supports version control for datasets and artifacts, ensuring reproducibility and traceability. -2. **Storage Backends**: Integrates with various storage solutions (e.g., S3, GCS) for data and artifact storage. -3. **Artifact Tracking**: Automatically tracks artifacts produced during pipeline execution, allowing for easy access and comparison. +## Core Features +- **Versioning**: ZenML supports version control for datasets and artifacts, allowing users to track changes and revert to previous states. 
+- **Storage Backends**: ZenML integrates with various storage solutions (e.g., S3, GCS) for efficient data and artifact storage. +- **Pipeline Integration**: Data and artifacts are seamlessly integrated into ZenML pipelines, enabling automated workflows. ## Example Code Snippet - ```python from zenml import pipeline @@ -17389,19 +17348,15 @@ def my_pipeline(): data = load_data() processed_data = preprocess(data) model = train_model(processed_data) - save_artifact(model) - -# Execute the pipeline -my_pipeline.run() + save_artifact(model, 'model.pkl') ``` ## Best Practices +- Regularly version datasets and artifacts to maintain a clear history. +- Choose appropriate storage backends based on project requirements. +- Use ZenML's built-in functions for loading and saving data and artifacts to ensure consistency. -- Use consistent naming conventions for datasets and artifacts. -- Regularly back up data and artifacts to prevent loss. -- Leverage ZenML's built-in tracking features to monitor changes and updates. - -This summary provides a concise overview of data and artifact management in ZenML, highlighting essential concepts, features, and practices. +This summary provides an overview of data and artifact management in ZenML, emphasizing essential features and practices for effective usage. ================================================== @@ -17421,7 +17376,7 @@ def my_pipeline(): ... ``` -This configuration prevents the visualization of artifacts during execution. +This configuration prevents visualizations from being generated for the specified step or pipeline. ================================================== @@ -17429,30 +17384,30 @@ This configuration prevents the visualization of artifacts during execution. # Creating Custom Visualizations in ZenML -ZenML allows you to create custom visualizations for artifacts using supported types: +ZenML allows you to associate custom visualizations with artifacts using supported types: -- **HTML**: Embedded HTML visualizations (e.g., data validation reports) -- **Image**: Visualizations of image data (e.g., Pillow images) -- **CSV**: Tables (e.g., pandas DataFrame output) -- **Markdown**: Markdown strings or pages -- **JSON**: JSON strings or objects +- **HTML:** Embedded HTML visualizations (e.g., data validation reports). +- **Image:** Visualizations of image data (e.g., Pillow images). +- **CSV:** Tables (e.g., pandas DataFrame output). +- **Markdown:** Markdown strings or pages. +- **JSON:** JSON strings or objects. ## Methods to Add Custom Visualizations -1. **Special Return Types**: If you have HTML, Markdown, CSV, or JSON data, cast them to a specific type in your step. -2. **Custom Materializers**: Define visualization logic for specific data types. -3. **Custom Return Type Class**: Create a custom return type and materializer for other visualizations. +1. **Special Return Types:** Cast HTML, Markdown, CSV, or JSON data to specific classes within your step. +2. **Custom Materializers:** Define visualization logic for specific data types by creating a custom materializer. +3. **Custom Return Type Class:** Create a custom return type with a corresponding materializer for other visualizations. 
### Visualization via Special Return Types -To return HTML, Markdown, CSV, or JSON from a step, use the following types: +You can return visualizations by casting strings to specific types: - `zenml.types.HTMLString` - `zenml.types.MarkdownString` - `zenml.types.CSVString` - `zenml.types.JSONString` -#### Example: Returning CSV +#### Example: ```python from zenml.types import CSVString @@ -17462,7 +17417,9 @@ def my_step() -> CSVString: return CSVString("a,b,c\n1,2,3") ``` -#### Example: Returning Matplotlib Visualization +### Visualizing Matplotlib Plots + +To visualize a matplotlib plot: ```python import matplotlib.pyplot as plt @@ -17496,11 +17453,11 @@ if __name__ == "__main__": ## Visualization via Materializers -To visualize artifacts of a specific data type, override the `save_visualizations()` method in a custom materializer. +To visualize artifacts of a specific type, override the `save_visualizations()` method in a custom materializer. ### Example: Matplotlib Figure Visualization -1. **Custom Class**: +1. **Custom Class:** ```python from pydantic import BaseModel @@ -17509,7 +17466,7 @@ class MatplotlibVisualization(BaseModel): figure: Any ``` -2. **Materializer**: +2. **Materializer:** ```python class MatplotlibMaterializer(BaseMaterializer): @@ -17522,7 +17479,7 @@ class MatplotlibMaterializer(BaseMaterializer): return {visualization_path: VisualizationType.IMAGE} ``` -3. **Step**: +3. **Step:** ```python @step @@ -17535,13 +17492,12 @@ def create_matplotlib_visualization() -> MatplotlibVisualization: ### Workflow -When the step is used in a pipeline: -1. The step returns a `MatplotlibVisualization`. -2. ZenML calls the `MatplotlibMaterializer` to save the figure. +1. The step creates and returns a `MatplotlibVisualization`. +2. ZenML invokes the `MatplotlibMaterializer` to save visualizations. 3. The figure is saved as a PNG in the artifact store. -4. The dashboard displays the PNG. +4. The dashboard displays the PNG when viewing the artifact. -For more examples, refer to the Hugging Face datasets materializer documentation. +For further examples, refer to the Hugging Face datasets materializer in the ZenML repository. ================================================== @@ -17549,184 +17505,144 @@ For more examples, refer to the Hugging Face datasets materializer documentation ### Types of Visualizations in ZenML -ZenML automatically saves and displays visualizations of various data types in the ZenML dashboard. These visualizations can also be accessed in Jupyter notebooks using the `artifact.visualize()` method. - -**Key Visualizations Include:** -- **Statistical Representation:** Visualizes a Pandas DataFrame as a PNG image. -- **Drift Detection Reports:** Generated by tools like Evidently, Great Expectations, and whylogs. -- **Hugging Face Datasets Viewer:** Embedded as an HTML iframe. +ZenML automatically saves and displays visualizations for various data types in the ZenML dashboard. These visualizations can also be viewed in Jupyter notebooks using the `artifact.visualize()` method. -**Example Usage in Jupyter:** -```python -artifact.visualize() -``` +**Examples of Default Visualizations:** +- **Pandas DataFrame**: Statistical representation saved as a PNG image. +- **Drift Detection Reports**: Generated by tools like Evidently, Great Expectations, and whylogs. +- **Hugging Face Datasets Viewer**: Embedded as an HTML iframe. -Visualizations enhance data understanding and facilitate analysis within the ZenML framework. 
+Visualizations enhance data analysis and monitoring within ZenML workflows. ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/visualizations-in-dashboard.md === -### Summary: Displaying Visualizations in the ZenML Dashboard +### Displaying Visualizations in the Dashboard -To display visualizations on the ZenML dashboard, the following steps are required: +To display visualizations on the ZenML dashboard, the following steps are necessary: -#### 1. Configuring a Service Connector -- Visualizations are stored in the artifact store. To display them, the ZenML server must have access to this store. -- Refer to the [service connector documentation](../../infrastructure-deployment/auth-management/README.md) for configuration details. -- For an example, see the [AWS S3 artifact store documentation](../../../component-guide/artifact-stores/s3.md). +#### Configuring a Service Connector +Visualizations are typically stored with artifacts in the artifact store. To view these visualizations, the ZenML server must have access to the artifact store. For detailed guidance, refer to the [service connector documentation](../../infrastructure-deployment/auth-management/README.md). For a specific example, see the [AWS S3 artifact store documentation](../../../component-guide/artifact-stores/s3.md). -**Note:** When using the default/local artifact store with a deployed ZenML, the server cannot access local files, resulting in no visualizations being displayed. Use a service connector with a remote artifact store to view visualizations. +**Important Note:** When using the default/local artifact store with a deployed ZenML, the server cannot access local files, resulting in visualizations not being displayed. To view visualizations, use a service connector with a remote artifact store. -#### 2. Configuring Artifact Stores -- If visualizations from a pipeline run are missing, check if the ZenML server has the necessary dependencies and permissions for the artifact store. -- Additional information can be found in the [custom artifact store documentation](../../../component-guide/artifact-stores/custom.md#enabling-artifact-visualizations-with-custom-artifact-stores). - -This summary retains critical information and key points while ensuring conciseness. +#### Configuring Artifact Stores +If visualizations from a pipeline run do not appear on the dashboard, the ZenML server may lack the necessary dependencies or permissions for the artifact store. Refer to the [custom artifact store documentation](../../../component-guide/artifact-stores/custom.md#enabling-artifact-visualizations-with-custom-artifact-stores) for further details. ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/README.md === ---- icon: chart-scatter description: Configuring ZenML for data visualizations in the dashboard. --- +--- icon: chart-scatter description: Configuring ZenML for data visualizations in the dashboard. --- # Visualize Artifacts -ZenML allows easy association of data visualizations and artifacts. +ZenML allows easy association of visualizations with data and artifacts. -![ZenML Artifact Visualizations](../../../.gitbook/assets/artifact_visualization_dashboard.png) +![ZenML Artifact Visualizations](../../../.gitbook/assets/artifact_visualization_dashboard.png) -For further details, refer to the ZenML documentation. +For more information, refer to the ZenML documentation. 
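As a minimal sketch of inspecting such visualizations programmatically, for example in a notebook, an artifact version can be fetched via the client; the artifact name is hypothetical:

```python
from zenml.client import Client

# Fetch a specific artifact version and render its stored visualizations
artifact = Client().get_artifact_version("my_dataset")
artifact.visualize()
```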
================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/registering-existing-data.md === -### Summary: Registering External Data as ZenML Artifacts - -This documentation explains how to register external data (folders or files) as ZenML artifacts for future use in machine learning workflows. - -#### Registering an Existing Folder as a ZenML Artifact -You can register an entire folder containing data as a ZenML artifact. The following code demonstrates how to create a folder with a file and register it: - -```python -import os -from uuid import uuid4 -from pathlib import Path -from zenml.client import Client -from zenml import register_artifact - -prefix = Client().active_stack.artifact_store.path -preexisting_folder = os.path.join(prefix, f"my_test_folder_{uuid4()}") -preexisting_file = os.path.join(preexisting_folder, "test_file.txt") - -os.mkdir(preexisting_folder) -with open(preexisting_file, "w") as f: - f.write("test") - -register_artifact(folder_or_file_uri=preexisting_folder, name="my_folder_artifact") - -temp_artifact_folder_path = Client().get_artifact_version(name_id_or_prefix="my_folder_artifact").load() -assert isinstance(temp_artifact_folder_path, Path) -assert os.path.isdir(temp_artifact_folder_path) -with open(os.path.join(temp_artifact_folder_path, "test_file.txt"), "r") as f: - assert f.read() == "test" -``` - -#### Registering an Existing File as a ZenML Artifact -Similarly, you can register a single file as a ZenML artifact: +### Summary of ZenML Artifact Registration Documentation -```python -import os -from uuid import uuid4 -from pathlib import Path -from zenml.client import Client -from zenml import register_artifact +This documentation outlines how to register external data as ZenML artifacts for future use, focusing on both folders and files, as well as managing model checkpoints during training with PyTorch Lightning. -prefix = Client().active_stack.artifact_store.path -preexisting_folder = os.path.join(prefix, f"my_test_folder_{uuid4()}") -preexisting_file = os.path.join(preexisting_folder, "test_file.txt") +#### Registering Existing Data -os.mkdir(preexisting_folder) -with open(preexisting_file, "w") as f: - f.write("test") +1. **Register Existing Folder as a ZenML Artifact**: + - You can register an entire folder containing data without needing to read or materialize the data. + - Example code: + ```python + import os + from uuid import uuid4 + from pathlib import Path + from zenml.client import Client + from zenml import register_artifact -register_artifact(folder_or_file_uri=preexisting_file, name="my_file_artifact") + prefix = Client().active_stack.artifact_store.path + folder_path = os.path.join(prefix, f"my_test_folder_{uuid4()}") + os.mkdir(folder_path) + with open(os.path.join(folder_path, "test_file.txt"), "w") as f: + f.write("test") -temp_artifact_file_path = Client().get_artifact_version(name_id_or_prefix="my_file_artifact").load() -assert isinstance(temp_artifact_file_path, Path) -assert not os.path.isdir(temp_artifact_file_path) -with open(temp_artifact_file_path, "r") as f: - assert f.read() == "test" -``` + register_artifact(folder_path, name="my_folder_artifact") + loaded_path = Client().get_artifact_version("my_folder_artifact").load() + assert os.path.isdir(loaded_path) + ``` -#### Registering Checkpoints from a PyTorch Lightning Training Run -You can register all checkpoints from a PyTorch Lightning training run as artifacts. 
The following code shows how to set up a trainer and register checkpoints: +2. **Register Existing File as a ZenML Artifact**: + - Similar to folders, individual files can also be registered. + - Example code: + ```python + import os + from uuid import uuid4 + from pathlib import Path + from zenml.client import Client + from zenml import register_artifact -```python -import os -from zenml.client import Client -from zenml import register_artifact -from pytorch_lightning import Trainer -from pytorch_lightning.callbacks import ModelCheckpoint -from uuid import uuid4 + prefix = Client().active_stack.artifact_store.path + file_path = os.path.join(prefix, f"my_test_file_{uuid4()}.txt") + with open(file_path, "w") as f: + f.write("test") -prefix = Client().active_stack.artifact_store.path -default_root_dir = os.path.join(prefix, uuid4().hex) + register_artifact(file_path, name="my_file_artifact") + loaded_file_path = Client().get_artifact_version("my_file_artifact").load() + ``` -trainer = Trainer( - default_root_dir=default_root_dir, - callbacks=[ModelCheckpoint(every_n_epochs=1, save_top_k=-1, filename="checkpoint-{epoch:02d}")] -) -trainer.fit(model) +#### Registering Checkpoints in PyTorch Lightning -register_artifact(default_root_dir, name="all_my_model_checkpoints") -``` +1. **Register All Checkpoints**: + - During a training run, you can register all checkpoints created by PyTorch Lightning. + - Example code: + ```python + from zenml.client import Client + from zenml import register_artifact + from pytorch_lightning import Trainer + from pytorch_lightning.callbacks import ModelCheckpoint + from uuid import uuid4 -#### Custom ModelCheckpoint for ZenML -To register each checkpoint as a separate artifact version, extend the `ModelCheckpoint` class: + prefix = Client().active_stack.artifact_store.path + default_root_dir = os.path.join(prefix, uuid4().hex) + trainer = Trainer(default_root_dir=default_root_dir, callbacks=[ModelCheckpoint()]) + trainer.fit(model) + register_artifact(default_root_dir, name="all_my_model_checkpoints") + ``` -```python -from zenml.client import Client -from zenml import register_artifact -from zenml import get_step_context -from pytorch_lightning.callbacks import ModelCheckpoint +2. **Register Checkpoints as Separate Artifact Versions**: + - Extend the `ModelCheckpoint` callback to register each checkpoint as a separate artifact version. + - Example code: + ```python + from zenml import register_artifact + from pytorch_lightning.callbacks import ModelCheckpoint -class ZenMLModelCheckpoint(ModelCheckpoint): - def __init__(self, artifact_name: str, every_n_epochs: int = 1, save_top_k: int = -1, *args, **kwargs): - zenml_model = get_step_context().model - self.artifact_name = artifact_name - self.default_root_dir = os.path.join(Client().active_stack.artifact_store.path, str(zenml_model.version)) - super().__init__(every_n_epochs=every_n_epochs, save_top_k=save_top_k, *args, **kwargs) + class ZenMLModelCheckpoint(ModelCheckpoint): + def __init__(self, artifact_name, *args, **kwargs): + # Initialization code... 
+                self.artifact_name = artifact_name
+                # Assumed here: reuse the per-epoch filename pattern from the example above
+                self.filename_format = "checkpoint-{epoch:02d}"
+                super().__init__(*args, **kwargs)

+            def on_train_epoch_end(self, trainer, pl_module):
+                super().on_train_epoch_end(trainer, pl_module)
+                register_artifact(os.path.join(self.dirpath, self.filename_format.format(epoch=trainer.current_epoch)), self.artifact_name)
+    ```

-#### Example Pipeline with Checkpoint Registration
-An example pipeline demonstrates training a PyTorch Lightning model with checkpoint registration:
+#### Example Pipeline with PyTorch Lightning
+- A complete example of a pipeline that trains a model and registers checkpoints:

```python
-@step
-def get_data() -> DataLoader:
-    dataset = MNIST(os.getcwd(), download=True, transform=ToTensor())
-    return DataLoader(dataset)
-
-@step
-def train_model(model: LightningModule, train_loader: DataLoader, epochs: int = 1, artifact_name: str = "my_model_ckpts"):
-    chkpt_cb = ZenMLModelCheckpoint(artifact_name=artifact_name)
-    trainer = Trainer(default_root_dir=chkpt_cb.default_root_dir, max_epochs=epochs, callbacks=[chkpt_cb])
-    trainer.fit(model, train_loader)
-
@pipeline(model=Model(name="LightningDemo"))
def train_pipeline(artifact_name: str = "my_model_ckpts"):
    train_loader = get_data()
    model = get_model()
    train_model(model, train_loader, 10, artifact_name)
+    predict(get_pipeline_context().model.get_artifact(artifact_name), after=["train_model"])
```

-This documentation provides a comprehensive guide on registering external data and managing artifacts in ZenML, particularly focusing on integration with PyTorch Lightning for model training and checkpoint management.
+This documentation provides essential details on registering artifacts in ZenML, particularly focusing on external data and model checkpoints, ensuring that users can effectively manage their machine learning artifacts.

==================================================

=== File: docs/book/how-to/data-artifact-management/complex-usecases/structuring-an-mlops-project.md ===

### Structuring an MLOps Project

-An MLOps project typically involves multiple pipelines, such as:
+An MLOps project typically consists of multiple pipelines, including:

- **Feature Engineering Pipeline**: Prepares raw data for training.
- **Training Pipeline**: Trains models using data from the feature engineering pipeline.
-- **Inference Pipeline**: Runs batch predictions on the trained model, often requiring preprocessing from the training pipeline.
-- **Deployment Pipeline**: Deploys the trained model to a production endpoint.
+- **Inference Pipeline**: Runs batch predictions on trained models.
+- **Deployment Pipeline**: Deploys trained models to production.

-The structure of these pipelines can vary based on project requirements, and artifacts (models, datasets, metadata) often need to be shared between them. Below are common patterns for artifact exchange.
+The structure of these pipelines can vary based on project requirements, and sharing artifacts (models, metadata) between them is essential.

-#### Pattern 1: Artifact Exchange via `Client`
+#### Artifact Exchange Patterns

-In this pattern, a feature engineering pipeline produces datasets that are sent to a training pipeline using the ZenML Client.
+**Pattern 1: Artifact Exchange via `Client`**
+
+In this pattern, the ZenML Client facilitates the exchange of artifacts between pipelines.
For example, a feature engineering pipeline produces datasets that are used in a training pipeline. ```python from zenml import pipeline @@ -17753,6 +17671,7 @@ from zenml.client import Client @pipeline def feature_engineering_pipeline(): + dataset = load_data() train_data, test_data = prepare_data() @pipeline @@ -17764,11 +17683,11 @@ def training_pipeline(): model_evaluator(model, sklearn_classifier) ``` -**Note**: The `train_data` and `test_data` are references to stored data, not materialized in memory during the pipeline function. +*Note*: Artifacts are referenced, not materialized in memory during the pipeline function. -#### Pattern 2: Artifact Exchange via `Model` +**Pattern 2: Artifact Exchange via `Model`** -This pattern uses a ZenML Model as a reference point for artifacts. For instance, a training pipeline (`train_and_promote`) produces models, which are promoted based on accuracy. An inference pipeline (`do_predictions`) uses the latest promoted model without needing artifact IDs or names. +This pattern uses the ZenML Model as a reference point. For instance, a training pipeline (`train_and_promote`) produces models that are promoted based on accuracy. An inference pipeline (`do_predictions`) retrieves the latest promoted model without needing to know specific artifact IDs. ```python from zenml import step, get_step_context @@ -17780,7 +17699,7 @@ def predict(data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: return predictions ``` -To avoid caching issues, you can disable caching in the step or resolve artifacts at the pipeline level: +If caching is enabled, it may lead to unexpected results. To avoid this, either disable caching or resolve artifacts at the pipeline level. ```python from zenml import get_pipeline_context, pipeline, Model @@ -17802,7 +17721,7 @@ if __name__ == "__main__": do_predictions() ``` -Both artifact exchange methods are valid; the choice depends on project needs and preferences. +Both artifact exchange patterns are valid; the choice depends on user preference and specific use cases. ================================================== @@ -17811,16 +17730,16 @@ Both artifact exchange methods are valid; the choice depends on project needs an # Custom Dataset Classes and Complex Data Flows in ZenML ## Overview -ZenML allows for efficient management of complex data flows in machine learning projects through custom Dataset classes and Materializers. This is particularly useful for handling multiple data sources and intricate data structures. +ZenML allows the encapsulation of data loading, processing, and saving logic through custom Dataset classes, facilitating the management of various data sources and complex data structures. ## Custom Dataset Classes -Custom Dataset classes encapsulate data loading, processing, and saving logic. They are beneficial when: -1. Working with various data sources (CSV, databases, cloud storage). +Custom Dataset classes are useful for: +1. Handling multiple data sources (e.g., CSV, databases). 2. Managing complex data structures. 3. Implementing custom data processing. 
### Example Implementation
-A base `Dataset` class can be extended for specific data sources like CSV and BigQuery:
+A base `Dataset` class can be implemented for different data sources, such as CSV and BigQuery:

```python
from abc import ABC, abstractmethod
from typing import Optional

import pandas as pd
from google.cloud import bigquery

class Dataset(ABC):
    @abstractmethod
    def read_data(self) -> pd.DataFrame:
        pass

class CSVDataset(Dataset):
    def __init__(self, data_path: str, df: Optional[pd.DataFrame] = None):
        self.data_path = data_path
        # `df or pd.read_csv(...)` would raise for DataFrames, so test for None explicitly
        self.df = df if df is not None else pd.read_csv(self.data_path)

    def read_data(self) -> pd.DataFrame:
        return self.df

class BigQueryDataset(Dataset):
    def __init__(self, table_id: str, project: Optional[str] = None, df: Optional[pd.DataFrame] = None):
        self.table_id = table_id
        self.project = project
        self.df = df  # required by write_data(); may also be attached after construction
        self.client = bigquery.Client(project=project)

    def read_data(self) -> pd.DataFrame:
        return self.client.query(f"SELECT * FROM `{self.table_id}`").to_dataframe()

    def write_data(self) -> None:
        job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE")
        self.client.load_table_from_dataframe(self.df, self.table_id, job_config=job_config).result()
```

## Custom Materializers
-Materializers handle serialization and deserialization of artifacts. Custom Materializers are essential for custom Dataset classes.
+Materializers handle serialization/deserialization of artifacts. Custom Materializers are crucial for custom Dataset classes.

### Example Materializers
```python
import json
import os
import tempfile
from typing import Type

from zenml.enums import ArtifactType
from zenml.io import fileio
from zenml.materializers import BaseMaterializer

class CSVDatasetMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (CSVDataset,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[CSVDataset]) -> CSVDataset:
        with tempfile.NamedTemporaryFile(delete=False, suffix='.csv') as temp_file:
            with fileio.open(os.path.join(self.uri, "data.csv"), "rb") as source_file:
                temp_file.write(source_file.read())
        return CSVDataset(temp_file.name)

    def save(self, dataset: CSVDataset) -> None:
        df = dataset.read_data()
        with tempfile.NamedTemporaryFile(delete=False, suffix='.csv') as temp_file:
            df.to_csv(temp_file.name, index=False)
        with open(temp_file.name, "rb") as source_file:
            with fileio.open(os.path.join(self.uri, "data.csv"), "wb") as target_file:
                target_file.write(source_file.read())
        os.remove(temp_file.name)

class BigQueryDatasetMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (BigQueryDataset,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[BigQueryDataset]) -> BigQueryDataset:
        with fileio.open(os.path.join(self.uri, "metadata.json"), "r") as f:
            metadata = json.load(f)
        return BigQueryDataset(table_id=metadata["table_id"], project=metadata["project"])

    def save(self, bq_dataset: BigQueryDataset) -> None:
        metadata = {"table_id": bq_dataset.table_id, "project": bq_dataset.project}
        with fileio.open(os.path.join(self.uri, "metadata.json"), "w") as f:
            json.dump(metadata, f)
        if bq_dataset.df is not None:
            bq_dataset.write_data()
```

## Pipeline Management
-Design flexible pipelines to manage multiple data sources effectively.
+Design flexible pipelines to handle multiple data sources:

### Example Pipeline
```python
from zenml import step, pipeline

@step(output_materializer=CSVDatasetMaterializer)
def extract_data_local(data_path: str = "data/raw_data.csv") -> CSVDataset:
    return CSVDataset(data_path)

@step(output_materializer=BigQueryDatasetMaterializer)
@@ -17923,38 +17825,38 @@ def extract_data_remote(table_id: str) -> BigQueryDataset:

@step
def transform(dataset: Dataset) -> pd.DataFrame:
-    df = dataset.read_data()
-    return df.copy()  # Apply transformations here
+    return dataset.read_data().copy()  # Apply transformations

@pipeline
def etl_pipeline(mode: str = "develop"):
    raw_data = extract_data_local() if mode == "develop" else extract_data_remote(table_id="project.dataset.raw_table")
    return transform(raw_data)
```

## Best Practices
-1. **Use a common base class**: This allows for consistent handling of different data sources.
-2. **Create specialized steps**: Implement separate steps for loading different datasets while keeping the underlying steps standardized.
-3. **Implement flexible pipelines**: Use configuration parameters to adapt to different data sources.
-4. **Modular step design**: Create steps that perform specific tasks to promote code reuse and maintenance.
+1. **Common Base Class**: Use the `Dataset` base class for consistent handling of data sources.
+2. **Specialized Steps**: Create separate steps for loading different datasets.
+3. **Flexible Pipelines**: Use configuration parameters or conditional logic to adapt to data sources.
+4. **Modular Design**: Create steps for specific tasks to promote code reuse and maintenance.

-By following these practices, you can build adaptable ZenML pipelines that efficiently manage complex data flows and multiple data sources. For scaling strategies, refer to [scaling strategies for big data](manage-big-data.md).
+By following these practices, ZenML pipelines can efficiently manage complex data flows and multiple data sources, ensuring flexibility as project requirements evolve. For scaling strategies, refer to [scaling strategies for big data](manage-big-data.md).

==================================================

=== File: docs/book/how-to/data-artifact-management/complex-usecases/manage-big-data.md ===

-### Summary: Scaling Strategies for Big Data in ZenML
+# Scaling Strategies for Big Data in ZenML

-This documentation outlines strategies for managing large datasets in ZenML, emphasizing the need for different approaches based on dataset size.
+## Overview
+This documentation outlines strategies for managing large datasets in ZenML, focusing on scaling pipelines as data sizes increase. It categorizes datasets into three sizes: small, medium, and large, and provides specific strategies for each category.

-#### Dataset Size Thresholds:
+## Dataset Size Thresholds
1. **Small datasets (up to a few GB)**: Handled in-memory with pandas.
2. **Medium datasets (up to tens of GB)**: Require chunking or out-of-core processing.
3. **Large datasets (hundreds of GB or more)**: Necessitate distributed processing frameworks.

-#### Strategies for Small Datasets:
-1. **Efficient Data Formats**: Use formats like Parquet for better performance.
+## Strategies for Small Datasets
+1. **Efficient Data Formats**: Use formats like Parquet instead of CSV.
```python
   import pyarrow.parquet as pq
@@ -17963,7 +17865,7 @@ This documentation outlines strategies for managing large datasets in ZenML, emp
        return pq.read_table(self.data_path).to_pandas()
   ```

-2. **Data Sampling**: Implement sampling in Dataset classes.
+2. **Data Sampling**: Implement sampling methods.
   ```python
   class SampleableDataset(Dataset):
       def sample_data(self, fraction: float = 0.1) -> pd.DataFrame:
@@ -17978,97 +17880,108 @@ This documentation outlines strategies for managing large datasets in ZenML, emp
           return df
   ```

-#### Strategies for Medium Datasets:
-1. **Chunking for CSV Datasets**: Process large files in chunks.
-   ```python
-   class ChunkedCSVDataset(Dataset):
-       def read_data(self):
-           for chunk in pd.read_csv(self.data_path, chunksize=self.chunk_size):
-               yield chunk
-   ```
+## Strategies for Medium Datasets
+### Chunking for CSV Datasets
+Implement chunking to process large files.
+```python
+class ChunkedCSVDataset(Dataset):
+    def read_data(self):
+        for chunk in pd.read_csv(self.data_path, chunksize=self.chunk_size):
+            yield chunk
+
+@step
+def process_chunked_csv(dataset: ChunkedCSVDataset) -> pd.DataFrame:
+    # `process_chunk` is a user-defined per-chunk transformation
+    return pd.concat([process_chunk(chunk) for chunk in dataset.read_data()])
+```
+
+### Leveraging Data Warehouses
+Utilize data warehouses like Google BigQuery for distributed processing.
+```python
+@step
+def process_big_query_data(dataset: BigQueryDataset) -> BigQueryDataset:
+    client = bigquery.Client()
+    query = f"SELECT column1, AVG(column2) as avg_column2 FROM `{dataset.table_id}` GROUP BY column1"
+    # Write the aggregated results to a destination table derived from the source
+    result_table_id = f"{dataset.table_id}_processed"
+    job_config = bigquery.QueryJobConfig(destination=result_table_id)
+    client.query(query, job_config=job_config).result()
+    return BigQueryDataset(table_id=result_table_id)
+```

-2. **Data Warehouses**: Use services like Google BigQuery for distributed processing.
+## Strategies for Large Datasets
+### Using Distributed Computing Frameworks
+1. **Apache Spark**:
   ```python
+   from pyspark.sql import SparkSession
+
   @step
-   def process_big_query_data(dataset: BigQueryDataset) -> BigQueryDataset:
-       client = bigquery.Client()
-       query = "SELECT column1, AVG(column2) as avg_column2 FROM `{dataset.table_id}` GROUP BY column1"
-       query_job = client.query(query)
-       query_job.result()
+   def process_with_spark(input_data: str) -> None:
+       spark = SparkSession.builder.appName("ZenMLSparkStep").getOrCreate()
+       df = spark.read.csv(input_data, header=True)
+       df.groupBy("column1").agg({"column2": "mean"}).write.csv("output_path")
+       spark.stop()
   ```

-#### Strategies for Very Large Datasets:
-1. **Distributed Computing Frameworks**: Use Apache Spark or Ray for large datasets.
-   - **Apache Spark**:
-   ```python
-   from pyspark.sql import SparkSession
-
-   @step
-   def process_with_spark(input_data: str) -> None:
-       spark = SparkSession.builder.appName("ZenMLSparkStep").getOrCreate()
-       df = spark.read.csv(input_data, header=True)
-       df.groupBy("column1").agg({"column2": "mean"}).write.csv("output_path")
-       spark.stop()
-   ```
-
-   - **Ray**:
-   ```python
-   import ray
-
-   @step
-   def process_with_ray(input_data: str) -> None:
-       ray.init()
-       # Define remote processing function
-       results = ray.get([process_partition.remote(part) for part in partitions])
-       ray.shutdown()
-   ```
+2. **Ray**:
+   ```python
+   import ray
+
+   @step
+   def process_with_ray(input_data: str) -> None:
+       ray.init()
+       # load_data, split_data, combine_results, and save_results are
+       # user-defined helpers; process_partition is a @ray.remote function
+       results = ray.get([process_partition.remote(part) for part in split_data(load_data(input_data))])
+       save_results(combine_results(results), "output_path")
+       ray.shutdown()
+   ```

-2. **Dask**: Integrate Dask for parallel computing.
+3. **Dask**: Integrate Dask for parallel computing.
   ```python
   import dask.dataframe as dd

   @step
   def create_dask_dataframe():
-       return dd.from_pandas(pd.DataFrame({'A': range(1000)}), npartitions=4)
+       return dd.from_pandas(pd.DataFrame({'A': range(1000), 'B': range(1000, 2000)}), npartitions=4)
+
+   @step
+   def process_dask_dataframe(df: dd.DataFrame) -> dd.DataFrame:
+       return df.map_partitions(lambda x: x ** 2)
   ```

-3. **Numba**: Use Numba for JIT compilation to speed up numerical computations.
+4. **Numba**: Use Numba for JIT compilation to speed up numerical computations.
   ```python
+   import numpy as np
   from numba import jit

   @jit(nopython=True)
   def numba_function(x):
       return x * x + 2 * x - 1
+
+   @step
+   def apply_numba_function(data: np.ndarray) -> np.ndarray:
+       return numba_function(data)
   ```

-#### Important Considerations:
+## Important Considerations
- **Environment Setup**: Ensure necessary frameworks are installed.
-- **Resource Management**: Coordinate resource allocation with ZenML orchestration.
-- **Error Handling**: Implement cleanup for frameworks like Spark and Ray.
+- **Resource Management**: Coordinate resource allocation between ZenML and the frameworks.
+- **Error Handling**: Implement error handling for resource cleanup.
- **Data I/O**: Use intermediate storage for large datasets.
- **Scaling**: Ensure infrastructure supports the scale of computation.

-#### Choosing the Right Scaling Strategy:
-- Base decisions on dataset size, processing complexity, infrastructure, update frequency, and team expertise. Start simple and scale as needed.
-
-By implementing these strategies, ZenML pipelines can efficiently handle datasets of varying sizes, maintaining manageable machine learning workflows. For further details on custom Dataset classes, refer to the documentation on [custom dataset classes](datasets.md).
+## Choosing the Right Strategy
+Consider dataset size, processing complexity, infrastructure, update frequency, and team expertise when selecting a scaling strategy. Start with simpler solutions and scale as needed. ZenML's architecture allows for evolving data processing strategies as projects grow.

==================================================

=== File: docs/book/how-to/data-artifact-management/complex-usecases/unmaterialized-artifacts.md ===

-### Summary of Documentation on Unmaterialized Artifacts in ZenML
+### Summary of Unmaterialized Artifacts in ZenML

-**Overview**: ZenML pipelines are structured around data-centric processes where each step reads and writes artifacts to an artifact store. Materializers manage the serialization and deserialization of these artifacts. However, there are scenarios where you may want to skip materialization and use references to artifacts instead.
+**Overview**: In ZenML, a pipeline is structured around steps that read and write artifacts to an artifact store. Materializers manage the serialization and deserialization of these artifacts. However, there are scenarios where you may want to skip materialization and use a reference to the artifact instead.

-**Warning**: Skipping materialization can lead to unintended consequences for downstream tasks. It should only be done when necessary.
+**Warning**: Skipping materialization can lead to unintended consequences for downstream tasks that depend on materialized artifacts. Use this approach only when necessary.

### Skipping Materialization

-- **Unmaterialized Artifact**: This is represented by `zenml.materializers.UnmaterializedArtifact`, which includes a `uri` property that points to the artifact's storage path.
-- To use an unmaterialized artifact in a step, specify `UnmaterializedArtifact` as the type.
+To use an unmaterialized artifact, import `UnmaterializedArtifact` and specify it as the type in your step:

-**Example Code**:
```python
from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact
from zenml import step
@@ -18078,19 +17991,10 @@ def my_step(my_artifact: UnmaterializedArtifact):
    pass
```

-### Code Example of Pipeline with Unmaterialized Artifacts
-
-The following pipeline structure illustrates the use of unmaterialized artifacts:
-
-```
-s1 -> s3
-s2 -> s4
-```
+### Example Pipeline

-- `s1` and `s2` produce identical artifacts.
-- `s3` consumes materialized artifacts, while `s4` consumes unmaterialized artifacts, allowing direct access to `dict_.uri` and `list_.uri`.
+The following example demonstrates how to implement unmaterialized artifacts in a pipeline:

-**Pipeline Code**:
```python
from typing_extensions import Annotated
from typing import Dict, List, Tuple
@@ -18123,31 +18027,35 @@ def example_pipeline():
    example_pipeline()
```

-For further details on using `UnmaterializedArtifact`, refer to the documentation on triggering pipelines from another pipeline.
+### Key Points
+- **UnmaterializedArtifact**: Allows access to the artifact's unique storage path via the `uri` property.
+- **Pipeline Structure**: Steps can produce artifacts that are either materialized or unmaterialized, enabling flexibility in how artifacts are consumed.
+
+For further details, refer to the ZenML documentation on [data artifact management](../../../how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md).

==================================================

=== File: docs/book/how-to/data-artifact-management/complex-usecases/README.md ===

-It seems that you provided a directive without accompanying documentation text to summarize. Please provide the specific documentation content you would like summarized, and I will be happy to assist you!
+(This source file is empty; there is no documentation content to summarize.)

==================================================

=== File: docs/book/how-to/data-artifact-management/handle-data-artifacts/load-artifacts-into-memory.md ===

-# Summary of ZenML Artifact Loading Documentation
+### Summary of ZenML Artifact Management

-ZenML pipelines typically consume artifacts produced by one another, but external data from non-ZenML sources may also be needed. For external artifacts, use the `ExternalArtifact` class. However, for exchanging data between ZenML pipelines, late materialization is essential, as pipelines are compiled before execution. This allows for passing not-yet-existing artifacts and their metadata, which is vital in multi-pipeline scenarios.
+ZenML allows pipeline steps to consume artifacts produced by other steps or external sources. For external artifacts, use `ExternalArtifact`. When exchanging data between ZenML pipelines, late materialization is essential, as pipelines are compiled before execution, fixing input parameters. This allows for passing artifacts that may not yet exist.

-### Key Use Cases for Artifact Exchange:
+#### Key Use Cases:
1. Grouping data products using ZenML Models.
2. Using the ZenML Client to manage artifacts.

-**Recommendation:** Utilize models to group and access artifacts across pipelines. Refer to the documentation on loading artifacts from a ZenML Model for more details.
+**Recommendation:** Use models for grouping and accessing artifacts across pipelines.
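
As an illustrative sketch of this recommendation (the model, version, and artifact names here are hypothetical), artifacts grouped under a ZenML Model can be retrieved like this:

```python
from zenml import Model

# "iris_classifier", "production", and "trained_model" are hypothetical names.
model = Model(name="iris_classifier", version="production")
trained_model = model.get_artifact("trained_model").load()
```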
-## Exchanging Artifacts with Client Methods
+### Exchanging Artifacts with Client Methods

-If the Model Control Plane is not in use, late materialization can still facilitate data exchange between pipelines. Below is an example of a modified `do_predictions` pipeline:
+If not using the Model Control Plane, you can still exchange data with late materialization. Below is an example of a modified `do_predictions` pipeline:

```python
from typing import Annotated
@@ -18163,6 +18071,7 @@ def predict(model1: ClassifierMixin, model2: ClassifierMixin, model1_metric: flo
    return predictions

 @step
 def load_data() -> pd.DataFrame:
+    # load inference data
    ...

 @pipeline
@@ -18171,7 +18080,6 @@ def do_predictions():
    metric_42 = model_42.run_metadata["MSE"].value
    model_latest = Client().get_artifact_version("trained_model")
    metric_latest = model_latest.run_metadata["MSE"].value
    inference_data = load_data()
    predict(model1=model_42, model2=model_latest, model1_metric=metric_42, model2_metric=metric_latest, data=inference_data)

@@ -18179,22 +18087,25 @@ if __name__ == "__main__":
    do_predictions()
```

-### Explanation:
-- The `predict` step compares models based on their MSE metrics and makes predictions using the better-performing model.
+### Explanation of Code:
+- The `predict` step compares two models based on their MSE metrics and returns predictions from the better-performing model.
- The `load_data` step is responsible for loading inference data.
-- Artifact retrieval (e.g., `Client().get_artifact_version(...)`) occurs at execution time, ensuring the latest versions are used.
+- The `do_predictions` pipeline retrieves specific and latest artifact versions, ensuring that the latest data is used at execution time, not compilation time.

-This approach allows for dynamic artifact management and ensures that the most current data is utilized during pipeline execution.
+This approach ensures that the most relevant model is used for predictions, enhancing the pipeline's effectiveness.

==================================================

=== File: docs/book/how-to/data-artifact-management/handle-data-artifacts/get-arbitrary-artifacts-in-a-step.md ===

-### Summary of Documentation on Fetching Artifacts in Steps
+### Summary

-Artifacts do not need to originate solely from direct upstream steps. You can fetch artifacts from other upstream steps or different pipelines using the ZenML client within a step.
+Artifacts in ZenML can be accessed not only from direct upstream steps but also from other pipelines. This is facilitated by the ZenML client, which allows fetching of metadata and artifacts.
+
+#### Key Points:
+- Artifacts can be retrieved using the ZenML client, enabling access to artifacts from various sources.
+- The following code snippet demonstrates how to fetch an artifact within a step:

-#### Key Code Example:
```python
from zenml.client import Client
from zenml import step
@@ -18206,117 +18117,123 @@ def my_step():
    accuracy = output.run_metadata["accuracy"].value
```

-This code demonstrates how to retrieve an artifact that has already been created and stored in the artifact store, enabling the use of artifacts from various sources.
+- This method is beneficial for utilizing pre-existing artifacts stored in the artifact store, regardless of their origin.

#### Additional Resources:
-- [Managing artifacts](../../../user-guide/starter-guide/manage-artifacts.md): Learn about the `ExternalArtifact` type and artifact transfer between steps.
+- For more on managing artifacts, refer to the [Managing artifacts](../../../user-guide/starter-guide/manage-artifacts.md) documentation, which includes information on the `ExternalArtifact` type and inter-step artifact passing. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md === -### Summary: Using Materializers in ZenML for Custom Data Types +### Summary of Using Materializers in ZenML -#### Overview -ZenML pipelines are data-centric, where steps are connected through their inputs and outputs. Materializers are crucial for managing how artifacts (data outputs) are serialized and deserialized when passed between steps and stored in the artifact store. +**Overview**: ZenML pipelines are data-centric, where steps read and write artifacts to an artifact store. **Materializers** manage how artifacts are serialized, deserialized, and stored. #### Built-In Materializers -ZenML provides several built-in materializers for common data types, automatically enabled without user interaction: - -| Materializer | Handled Data Types | Storage Format | -|--------------|---------------------|----------------| -| BuiltInMaterializer | `bool`, `float`, `int`, `str`, `None` | `.json` | -| BytesMaterializer | `bytes` | `.txt` | -| BuiltInContainerMaterializer | `dict`, `list`, `set`, `tuple` | Directory | -| NumpyMaterializer | `np.ndarray` | `.npy` | -| PandasMaterializer | `pd.DataFrame`, `pd.Series` | `.csv` (or `.gzip` if `parquet` is installed) | -| PydanticMaterializer | `pydantic.BaseModel` | `.json` | -| ServiceMaterializer | `zenml.services.service.BaseService` | `.json` | -| StructuredStringMaterializer | `zenml.types.CSVString`, `zenml.types.HTMLString`, `zenml.types.MarkdownString` | `.csv`, `.html`, `.md` | - -**Warning:** The `CloudpickleMaterializer` can handle any object but is not production-ready due to compatibility issues across Python versions. +ZenML provides several built-in materializers for common data types, automatically enabled: +- **BuiltInMaterializer**: Handles `bool`, `float`, `int`, `str`, `None` - Storage: `.json` +- **BytesMaterializer**: Handles `bytes` - Storage: `.txt` +- **BuiltInContainerMaterializer**: Handles `dict`, `list`, `set`, `tuple` - Storage: Directory +- **NumpyMaterializer**: Handles `np.ndarray` - Storage: `.npy` +- **PandasMaterializer**: Handles `pd.DataFrame`, `pd.Series` - Storage: `.csv` or `.gzip` +- **PydanticMaterializer**: Handles `pydantic.BaseModel` - Storage: `.json` +- **ServiceMaterializer**: Handles `zenml.services.service.BaseService` - Storage: `.json` +- **StructuredStringMaterializer**: Handles `zenml.types.CSVString`, `HTMLString`, `MarkdownString` - Storage: `.csv`, `.html`, `.md` + +**Warning**: The `CloudpickleMaterializer` can handle any object but is not production-ready due to compatibility issues across Python versions. #### Integration Materializers -ZenML also supports integration-specific materializers, which can be activated by installing the respective integration. Examples include: - -- **BentoMaterializer** for `bentoml.Bento` (`.bento`) -- **DeepchecksResultMaterializer** for `deepchecks.CheckResult` (`.json`) -- **LightGBMBoosterMaterializer** for `lgbm.Booster` (`.txt`) +ZenML also offers integration-specific materializers activated by installing respective integrations. 
Examples include: +- **BentoMaterializer** for `bentoml.Bento` - Storage: `.bento` +- **DeepchecksResultMaterializer** for `deepchecks.CheckResult` - Storage: `.json` +- **LightGBMBoosterMaterializer** for `lgbm.Booster` - Storage: `.txt` -**Note:** For Docker-based orchestrators, specify required integrations in `DockerSettings`. +**Note**: For Docker-based orchestrators, specify required integrations in `DockerSettings`. #### Custom Materializers To create a custom materializer: -1. **Define the Materializer:** +1. **Define the Materializer**: - Subclass `BaseMaterializer`. - Set `ASSOCIATED_TYPES` and `ASSOCIATED_ARTIFACT_TYPE`. -```python -class MyMaterializer(BaseMaterializer): - ASSOCIATED_TYPES = (MyObj,) - ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA + ```python + class MyMaterializer(BaseMaterializer): + ASSOCIATED_TYPES = (MyObj,) + ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA - def load(self, data_type: Type[MyObj]) -> MyObj: - # Load logic - ... + def load(self, data_type: Type[MyObj]) -> MyObj: + # Load logic + ... - def save(self, my_obj: MyObj) -> None: - # Save logic - ... -``` + def save(self, my_obj: MyObj) -> None: + # Save logic + ... + ``` -2. **Configure Steps:** - - Use the `@step` decorator or `.configure()` method to specify the materializer. +2. **Configure Steps**: + - Use `@step(output_materializers=MyMaterializer)` or `.configure()` method. -```python -@step(output_materializers=MyMaterializer) -def my_first_step() -> MyObj: - return MyObj("my_object") -``` + ```python + @step(output_materializers=MyMaterializer) + def my_first_step() -> MyObj: + return MyObj("my_object") + ``` -3. **Global Materializer Registration:** - - Override default materializers globally using the materializer registry. +3. **Global Materializer**: To apply a custom materializer globally, register it in the materializer registry. -```python -materializer_registry.register_and_overwrite_type(key=pd.DataFrame, type_=FastPandasMaterializer) -``` + ```python + materializer_registry.register_and_overwrite_type(key=pd.DataFrame, type_=FastPandasMaterializer) + ``` -#### Implementing Load and Save Methods -The `load()` method retrieves and deserializes data, while the `save()` method serializes and stores data in the artifact store. 
+#### Example Implementation
+Here’s a simple example of using a custom materializer:

```python
+import logging
+import os
+from typing import Type
+
+from zenml import pipeline, step
+from zenml.enums import ArtifactType
+from zenml.materializers import BaseMaterializer
+
-def load(self, data_type: Type[MyObj]) -> MyObj:
-    with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'r') as f:
-        name = f.read()
-    return MyObj(name=name)
+class MyObj:
+    def __init__(self, name: str):
+        self.name = name

-def save(self, my_obj: MyObj) -> None:
-    with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'w') as f:
-        f.write(my_obj.name)
-```
+class MyMaterializer(BaseMaterializer):
+    ASSOCIATED_TYPES = (MyObj,)
+    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

-#### Example Pipeline
-Here’s a basic example of using a custom materializer in a pipeline:
+    def load(self, data_type: Type[MyObj]) -> MyObj:
+        with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'r') as f:
+            return MyObj(f.read())
+
+    def save(self, my_obj: MyObj) -> None:
+        with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'w') as f:
+            f.write(my_obj.name)

-```python
@step
def my_first_step() -> MyObj:
    return MyObj("my_object")

@step
def my_second_step(my_obj: MyObj) -> None:
-    logging.info(f"The following object was passed to this step: `{my_obj.name}`")
+    logging.info(f"Object passed: {my_obj.name}")

@pipeline
def first_pipeline():
    output_1 = my_first_step()
    my_second_step(output_1)

my_first_step.configure(output_materializers=MyMaterializer)
first_pipeline()
```

-This setup ensures that the custom materializer handles the serialization and deserialization of `MyObj`, making the pipeline robust and production-ready.
+This example demonstrates the creation of a custom materializer for a class `MyObj`, allowing it to be passed between pipeline steps without warnings.
+
+#### Additional Features
+- **Visualizations**: Override `save_visualizations()` to save visual representations of artifacts.
+- **Metadata Extraction**: Override `extract_metadata()` to track custom metadata alongside artifacts.
+
+#### Important Considerations
+- Ensure compatibility with custom artifact stores.
+- Use `get_temporary_directory()` for temporary directories in materializers.
+- Disable artifact visualization or metadata extraction if not needed.
+
+This summary provides a concise overview of using materializers in ZenML, covering built-in options, custom implementations, and integration specifics.

==================================================

@@ -18324,17 +18241,17 @@

### Deleting Artifacts in ZenML

-Artifacts cannot be deleted directly to avoid breaking the ZenML database due to dangling references. However, you can delete artifacts that are no longer referenced by any pipeline runs using the following command:
+Artifacts cannot be deleted directly to avoid breaking the ZenML database with dangling references. However, you can delete artifacts that are no longer referenced by any pipeline runs using the following command:

```shell
zenml artifact prune
```

-By default, this command removes artifacts from the artifact store and the database. You can modify this behavior with the flags:
+By default, this command removes artifacts from the underlying artifact store and deletes their database entries. You can modify this behavior with the following flags:

- `--only-artifact`: Deletes only the artifact.
-- `--only-metadata`: Deletes only the metadata entry.
+- `--only-metadata`: Deletes only the database entry.
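
For example, to delete only the database entries while keeping the stored artifact files (a minimal sketch using the flags above):

```shell
zenml artifact prune --only-metadata
```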
-If you encounter errors while pruning artifacts (often due to local storage issues), you can use the `--ignore-errors` flag to proceed with pruning, although warnings will still be displayed. +If you encounter errors while pruning (often due to locally stored artifacts that no longer exist), you can use the `--ignore-errors` flag to continue the process, though warning messages will still be displayed. ================================================== @@ -18342,23 +18259,27 @@ If you encounter errors while pruning artifacts (often due to local storage issu ### ZenML Data Storage Overview -ZenML integrates data versioning and lineage tracking into its core functionality. When a pipeline runs, it automatically tracks and manages artifacts, allowing users to view the lineage of artifact creation and interact with them via a dashboard. This functionality enhances insights, streamlines experimentation, and ensures reproducibility in machine learning workflows. +ZenML integrates data versioning and lineage tracking into its core functionality, automatically managing artifacts generated during pipeline executions. Users can view the lineage of artifacts and interact with them via a dashboard, enhancing insights, reproducibility, and reliability in machine learning workflows. #### Artifact Creation and Caching -During a pipeline execution, ZenML checks for changes in inputs, outputs, parameters, or configurations. Each step generates a new directory in the artifact store. If a step is new or modified, ZenML creates a unique directory structure with a unique ID and stores the data using the appropriate materializers. If unchanged, ZenML may cache the step, saving time and computational resources. +- Each pipeline run checks for changes in inputs, outputs, parameters, or configurations. +- New or modified steps create a unique directory in the [Artifact Store](../../../component-guide/artifact-stores/artifact-stores.md) with a unique ID. +- Unchanged steps may be cached, saving time and computational resources, allowing focus on new configurations. +- ZenML provides traceability of artifacts back to their origins, crucial for identifying issues and ensuring reproducibility, especially in team environments. -This caching mechanism allows users to focus on experimenting with different configurations without rerunning unchanged pipeline parts. ZenML provides transparency in tracing artifacts back to their origins, ensuring reproducibility and identifying potential issues in pipelines. +For detailed artifact management, see [artifact versioning and configuration](../../../user-guide/starter-guide/manage-artifacts.md). -For artifact versioning and configuration adjustments, refer to the [artifact management documentation](../../../user-guide/starter-guide/manage-artifacts.md). +#### Materializers -#### Saving and Loading Artifacts with Materializers +Materializers are essential for artifact management, handling serialization and deserialization of artifacts. They store data in unique directories within the artifact store. -Materializers handle the serialization and deserialization of artifacts in ZenML. They store data in unique directories within the artifact store. ZenML offers built-in materializers for common data types and uses `cloudpickle` for objects without a default materializer. Custom materializers can be created by extending the `BaseMaterializer` class. +- ZenML includes built-in materializers for common data types and uses `cloudpickle` for objects without a default materializer. 
+- Custom materializers can be created by extending the `BaseMaterializer` class. -**Warning:** The built-in `CloudpickleMaterializer` can handle any object but is not production-ready due to compatibility issues across different Python versions. It may also pose security risks by allowing the upload of malicious files. For robust solutions, consider implementing custom materializers. +**Warning:** The built-in [CloudpickleMaterializer](https://sdkdocs.zenml.io/latest/core_code_docs/core-materializers/#zenml.materializers.cloudpickle_materializer.CloudpickleMaterializer) is not production-ready due to potential compatibility issues across Python versions and security risks from malicious file uploads. Custom materializers are recommended for robust use cases. -ZenML utilizes the `fileio` system for saving and loading artifacts, simplifying interactions with various data formats and enabling artifact caching and lineage tracking. An example of a default materializer, such as the `numpy` materializer, can be found [here](https://github.com/zenml-io/zenml/blob/main/src/zenml/materializers/numpy_materializer.py). +During pipeline execution, ZenML employs materializers to manage artifact saving and loading through the `fileio` system, facilitating artifact caching and lineage tracking. An example of a default materializer, the `numpy` materializer, can be found [here](https://github.com/zenml-io/zenml/blob/main/src/zenml/materializers/numpy_materializer.py). ================================================== @@ -18368,10 +18289,8 @@ ZenML utilizes the `fileio` system for saving and loading artifacts, simplifying The `Annotated` type in ZenML allows you to return multiple outputs from a step with specific names, enhancing artifact retrieval and dashboard readability. -#### Key Points: -- **Functionality**: Use `Annotated` to name outputs for easier access and improved clarity in the pipeline dashboard. - -#### Example Code: +#### Code Example + ```python from typing import Annotated, Tuple import pandas as pd @@ -18390,22 +18309,26 @@ def clean_data(data: pd.DataFrame) -> Tuple[ return train_test_split(x, y, test_size=0.2, random_state=42) ``` -#### Explanation: -- The `clean_data` step takes a DataFrame and returns a tuple containing training and testing sets for features (`x_train`, `x_test`) and target (`y_train`, `y_test`). -- Each output is annotated for easy identification in the pipeline, aiding in both retrieval and visualization on the dashboard. +#### Key Points +- The `clean_data` function takes a pandas DataFrame and returns a tuple containing training and testing datasets for features (`x_train`, `x_test`) and target (`y_train`, `y_test`). +- Each output is annotated with a name using `Annotated`, facilitating easy identification and retrieval in the pipeline. +- The function uses `train_test_split` from scikit-learn to perform the data split. + +This approach improves the clarity of the pipeline's dashboard by displaying named outputs. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/tagging.md === -### Summary of ZenML Tagging Documentation +### Organizing Data with Tags in ZenML -**Overview**: ZenML allows users to organize machine learning artifacts and models using tags, enhancing workflow efficiency and asset discoverability. +ZenML allows users to organize machine learning artifacts and models using tags, enhancing workflow efficiency and asset discoverability. 
This guide covers how to assign tags to artifacts and models. #### Assigning Tags to Artifacts -- Use the `tags` property in `ArtifactConfig` to assign tags to artifacts created by steps or pipelines. -**Python SDK Example**: +To tag artifact versions in a step or pipeline, use the `tags` property of `ArtifactConfig`: + +**Python SDK Example:** ```python from zenml import step, ArtifactConfig @@ -18416,24 +18339,29 @@ def training_data_loader() -> ( ... ``` -**CLI Example**: +**CLI Example:** ```shell -# Tag an artifact +# Tag the artifact zenml artifacts update iris_dataset -t sklearn -# Tag a specific artifact version +# Tag the artifact version zenml artifacts versions update iris_dataset raw_2023 -t sklearn ``` -- Tags like `sklearn` and `pre-training` will be applied to all artifacts from the step. + +Tags like `sklearn` and `pre-training` will be assigned to all artifacts created by the step. ZenML Pro users can tag artifacts directly in the cloud dashboard. #### Assigning Tags to Models -- Models can also be tagged for better organization. Tags can be specified as key-value pairs when creating a model version. -**Python SDK Example**: +Models can also be tagged for semantic organization. Tags can be added as key-value pairs when creating a model version: + +**Python SDK Example:** ```python from zenml.models import Model +# Define tags tags = ["experiment", "v1", "classification-task"] + +# Create a model version with tags model = Model(name="iris_classifier", version="1.0.0", tags=tags) @pipeline(model=model) @@ -18441,15 +18369,19 @@ def my_pipeline(...): ... ``` -- To create or register models with tags: +You can also create or register models and their versions with tags: + ```python from zenml.client import Client +# Create a new model with tags Client().create_model(name="iris_logistic_regression", tags=["classification", "iris-dataset"]) + +# Create a new model version with tags Client().create_model_version(model_name_or_id="iris_logistic_regression", name="2", tags=["version-1", "experiment-42"]) ``` -**CLI Example**: +**CLI Example for Existing Models:** ```shell # Tag an existing model zenml model update iris_logistic_regression --tag "classification" @@ -18459,9 +18391,8 @@ zenml model version update iris_logistic_regression 2 --tag "experiment3" ``` #### Important Notes -- Tags enhance the organization of both artifacts and models. -- ZenML Pro users can tag artifacts directly in the cloud dashboard. -- Models created implicitly during pipeline runs do not inherit tags from the `Model` class; tags must be managed separately. +- During a pipeline run, a model may be implicitly created without tags from the `Model` class. +- Tags can be manipulated using the SDK or the ZenML Pro UI. ================================================== @@ -18469,32 +18400,35 @@ zenml model version update iris_logistic_regression 2 --tag "experiment3" ### ZenML Artifact Naming Overview -In ZenML pipelines, naming artifacts is crucial for tracking and reusing outputs, especially when steps are repeated with different inputs. ZenML supports both static and dynamic naming strategies for output artifacts, utilizing type annotations to determine names. Artifacts with the same name receive incremented version numbers. +In ZenML pipelines, managing artifact names is crucial for tracking outputs, especially when steps are reused with different inputs. ZenML allows both static and dynamic naming strategies for output artifacts, utilizing type annotations to determine names. 
Artifacts with the same name receive incremented version numbers. #### Naming Strategies -1. **Static Naming** - - Defined using string literals. +1. **Static Naming**: Defined as string literals. ```python @step def static_single() -> Annotated[str, "static_output_name"]: return "null" ``` -2. **Dynamic Naming** - - **Using Standard Placeholders**: Automatically replaced by ZenML. +2. **Dynamic Naming**: Generated at runtime using string templates. + - **Standard Placeholders**: + - `{date}`: Current date (e.g., `2024_11_18`) + - `{time}`: Current time (e.g., `11_07_09_326492`) ```python @step def dynamic_single_string() -> Annotated[str, "name_{date}_{time}"]: return "null" ``` - - **Using Custom Placeholders**: Defined via the `substitutions` parameter. + + - **Custom Placeholders**: Defined via the `substitutions` parameter. ```python @step(substitutions={"custom_placeholder": "some_substitute"}) def dynamic_single_string() -> Annotated[str, "name_{custom_placeholder}_{time}"]: return "null" ``` - - **Using `with_options`**: Redefine placeholders dynamically. + + - **Using `with_options`**: ```python @step def extract_data(source: str) -> Annotated[str, "{stage}_dataset"]: @@ -18506,12 +18440,10 @@ In ZenML pipelines, naming artifacts is crucial for tracking and reusing outputs extract_data.with_options(substitutions={"stage": "test"})(source="s3://test") ``` - **Standard Substitutions**: - - `{date}`: Current date (e.g., `2024_11_27`) - - `{time}`: Current time in UTC (e.g., `11_07_09_326492`) + **Substitution Scope**: + - Set in `@pipeline`, `pipeline.with_options`, `@step`, or `step.with_options`. -3. **Multiple Output Handling** - - Combine naming options for multiple artifacts. +3. **Multiple Output Handling**: Combine naming strategies for multiple artifacts. ```python @step def mixed_tuple() -> Tuple[ @@ -18522,8 +18454,8 @@ In ZenML pipelines, naming artifacts is crucial for tracking and reusing outputs ``` #### Caching Behavior -When caching is enabled, the names of output artifacts remain consistent with the original run, even if the step is executed again. +When caching is enabled, artifact names remain consistent across runs, even for dynamic names. Example: ```python @step(substitutions={"custom_placeholder": "resolution"}) def demo() -> Tuple[ @@ -18541,12 +18473,11 @@ if __name__ == "__main__": run_with_cache = my_pipeline.with_options(enable_cache=True)() ``` -**Output Example**: -``` -['name_2024_11_21_14_27_33_750134', 'name_resolution'] -``` +Both runs will produce consistent output artifact names, demonstrating the caching mechanism. -This summary provides a concise overview of how to effectively name artifacts in ZenML, highlighting key strategies and examples for both static and dynamic naming, as well as the behavior of cached runs. +### Summary + +ZenML provides flexible artifact naming strategies through static and dynamic methods, leveraging placeholders and substitutions. This allows for clear tracking of artifacts across multiple pipeline runs, especially when caching is utilized. ================================================== @@ -18554,9 +18485,9 @@ This summary provides a concise overview of how to effectively name artifacts in ### Summary of ZenML Step Outputs and Pipeline -In ZenML, step outputs are stored in an artifact store, enabling caching, lineage, and auditability. Using type annotations for outputs enhances transparency, facilitates data passing between steps, and allows ZenML to serialize/deserialize data (termed 'materialize'). 
+In ZenML, step outputs are stored in an artifact store, facilitating caching, lineage, and auditability. Using type annotations for outputs enhances transparency, aids in data transfer between steps, and allows for serialization/deserialization (termed 'materialize'). -#### Code Overview +#### Key Steps and Pipeline Definition ```python @step @@ -18574,16 +18505,15 @@ def train_model(data: Dict[str, Any]) -> None: @pipeline def simple_ml_pipeline(parameter: int): - dataset = load_data(parameter) + dataset = load_data(parameter=parameter) train_model(dataset) ``` -#### Key Points: -- **Steps**: - - `load_data`: Accepts an integer and returns a dictionary with features and labels. - - `train_model`: Takes the output from `load_data`, computes sums, and prints training details. - -- **Pipeline**: `simple_ml_pipeline` orchestrates the execution of `load_data` and `train_model`, demonstrating data flow between steps. +- **`load_data` Step**: Takes an integer parameter and returns a dictionary with training data and labels. +- **`train_model` Step**: Receives the output from `load_data`, computes sums of features and labels, and simulates model training. +- **`simple_ml_pipeline`**: Chains `load_data` and `train_model`, demonstrating data flow between steps. + +This structure illustrates how ZenML manages data through pipelines, ensuring efficient processing and tracking. ================================================== @@ -18591,92 +18521,89 @@ def simple_ml_pipeline(parameter: int): # ZenML Core Concepts Summary -**ZenML** is an open-source MLOps framework designed for creating portable, production-ready MLOps pipelines. It facilitates collaboration among data scientists, ML engineers, and MLOps developers through various concepts categorized into three main threads: **Development**, **Execution**, and **Management**. +**ZenML** is an open-source MLOps framework designed for creating portable, production-ready MLOps pipelines. It facilitates collaboration among data scientists, ML engineers, and MLOps developers. The core concepts of ZenML can be categorized into three main threads: + +1. **Development**: Focuses on designing machine learning workflows. +2. **Execution**: Involves utilizing MLOps tooling and infrastructure during workflow execution. +3. **Management**: Pertains to establishing and maintaining production-grade solutions. ## 1. Development ### Steps -- **Steps** are functions decorated with `@step`. They can have typed inputs and outputs. - -```python -@step -def step_1() -> str: - return "world" - -@step(enable_cache=False) -def step_2(input_one: str, input_two: str) -> str: - return f"{input_one} {input_two}" -``` +- Functions decorated with `@step`. +- Example: + ```python + @step + def step_1() -> str: + return "world" + ``` ### Pipelines -- **Pipelines** consist of a series of steps, defined using decorators or classes. Steps can only call other steps and can use outputs from previous steps or direct JSON-serializable values. - -```python -@pipeline -def my_pipeline(): - output_step_one = step_1() - step_2(input_one="hello", input_two=output_step_one) - -if __name__ == "__main__": - my_pipeline() -``` +- A pipeline consists of a series of steps, defined using decorators or classes. +- Example: + ```python + @pipeline + def my_pipeline(): + output_step_one = step_1() + step_2(input_one="hello", input_two=output_step_one) + ``` ### Artifacts -- **Artifacts** are data tracked and stored by ZenML, produced by steps. They are serialized/deserialized using **Materializers**. 
+- Represent data passing through steps, automatically tracked and stored by ZenML. ### Models -- **Models** represent outputs of training processes, including weights and metadata. They are managed through the ZenML API. +- Represent outputs of training processes and associated metadata. ### Materializers -- **Materializers** define how artifacts are serialized/deserialized, using the `BaseMaterializer` class. Custom materializers can be created for unsupported data types. +- Define serialization/deserialization of artifacts. Custom materializers can be created if needed. ### Parameters & Settings -- Steps can take parameters, which ZenML tracks for reproducibility. **Settings** configure runtime configurations for pipelines. +- Steps can take parameters, which are stored by ZenML for reproducibility. ### Model Versions -- A **Model** consists of multiple versions, linking various entities for centralized management. +- A model can have multiple versions, linking all entities to a centralized view. ## 2. Execution ### Stacks & Components -- A **Stack** is a collection of components for executing pipelines, including orchestrators and artifact stores. +- A **Stack** is a collection of components (e.g., orchestrators, artifact stores) for executing pipelines. ### Orchestrator -- The **Orchestrator** coordinates step execution in a pipeline, managing dependencies. ZenML includes a local orchestrator for initial development. +- Coordinates the execution of steps in a pipeline. ZenML provides a local orchestrator for experimentation. ### Artifact Store -- The **Artifact Store** tracks and versions artifacts, enabling features like data caching. +- Houses all data passing through the pipeline, enabling features like data caching. ### Flavor -- **Flavors** are tailored solutions for stack components, allowing users to create custom implementations. +- Base abstractions for stack components that can be customized for specific use cases. ### Stack Switching -- ZenML allows easy switching between local and cloud stacks via CLI commands. +- Easily switch between local and remote stacks with a single CLI command. ## 3. Management ### ZenML Server -- The **ZenML Server** is required for remote stack components, managing entities like pipelines and models. +- Required for using remote stack components and managing ZenML entities (pipelines, models, etc.). ### Server Deployment -- Users can deploy the ZenML server via the **ZenML Pro SaaS** or self-hosting. +- Options include ZenML Pro SaaS or self-hosted deployment. ### Metadata Tracking -- The server tracks metadata around pipeline runs, aiding in troubleshooting. +- The ZenML Server tracks metadata around pipeline runs, aiding in troubleshooting. ### Secrets Management -- The server acts as a centralized secrets store for sensitive data, configurable with various backends. +- Centralized secrets store for sensitive data, configurable with various backends (e.g., AWS Secrets Manager). ### Collaboration -- ZenML promotes collaboration among team members, allowing sharing of pipelines and resources. +- Facilitates teamwork among diverse roles in MLOps, allowing sharing of pipelines and resources. ### Dashboard -- The **ZenML Dashboard** visualizes pipelines and stacks, facilitating collaboration and management. +- Visual interface to manage pipelines, stacks, and components, enhancing collaboration. ### VS Code Extension -- A **VS Code extension** allows interaction with ZenML stacks and pipelines directly from the editor. 
+- Allows interaction with ZenML stacks and runs directly from the VS Code editor. -This summary encapsulates the core concepts of ZenML, providing a foundation for understanding its functionalities and components. +This summary encapsulates the essential technical information and key points of ZenML's core concepts, enabling effective understanding and interaction with the framework. ================================================== @@ -18684,44 +18611,38 @@ This summary encapsulates the core concepts of ZenML, providing a foundation for # ZenML System Architecture Overview -This guide outlines the deployment options for ZenML, including ZenML OSS (self-hosted), ZenML Pro (SaaS or self-hosted), and their respective components. - -## ZenML OSS (Self-hosted) +## Deployment Options +ZenML can be deployed in various configurations: self-hosted OSS, SaaS, or self-hosted ZenML Pro. -ZenML OSS consists of: -- **ZenML OSS Server**: A FastAPI app managing metadata for pipelines, artifacts, and stacks. -- **OSS Metadata Store**: Stores ML metadata, including tracking and versioning information. +### ZenML OSS (Self-hosted) +- **ZenML OSS Server**: A FastAPI application managing metadata for pipelines, artifacts, and stacks. +- **OSS Metadata Store**: Stores all tenant metadata, including ML tracking and versioning. - **OSS Dashboard**: A ReactJS app displaying pipelines and runs. - **Secrets Store**: Secure storage for credentials needed to access infrastructure services. -ZenML OSS is available under the Apache 2.0 license. For deployment instructions, refer to the [deployment guide](./deploying-zenml/README.md). - -## ZenML Pro (SaaS or Self-hosted) +ZenML OSS is free under the Apache 2.0 license. For deployment details, refer to the [deployment guide](./deploying-zenml/README.md). -ZenML Pro enhances OSS with: +### ZenML Pro (SaaS or Self-hosted) - **ZenML Pro Control Plane**: Central management for all tenants. -- **Pro Dashboard**: An upgraded dashboard with additional functionalities. -- **Pro Metadata Store**: A PostgreSQL database for roles, permissions, and tenant management. -- **Pro Add-ons**: Python modules for enhanced capabilities. -- **Identity Provider**: Supports flexible authentication, integrating with Auth0 for cloud deployments or custom OIDC for self-hosted setups. - -ZenML Pro offers various hosting options, from SaaS to fully air-gapped deployments. Existing ZenML OSS deployments can be upgraded to ZenML Pro easily. - -### ZenML Pro SaaS Architecture +- **Pro Dashboard**: Enhanced dashboard functionality over OSS. +- **Pro Metadata Store**: PostgreSQL database for roles, permissions, and tenant management. +- **Pro Add-ons**: Python modules for added features. +- **Identity Provider**: Supports flexible authentication, including integration with Auth0 for cloud deployments and custom OIDC for self-hosted setups. -In the SaaS model: -- ZenML services are hosted by the ZenML team. -- Customer secrets are managed by the ZenML Pro Control Plane, while ML metadata is stored on ZenML infrastructure. -- Actual ML data artifacts are stored on the customer's cloud, requiring permissions for access. +ZenML Pro offers various hosting options, from SaaS to fully air-gapped deployments. -A hybrid option allows customers to store secrets on their side, connecting their secret store to the ZenML server. +#### ZenML Pro SaaS Architecture +- All ZenML services are hosted by ZenML, with customer secrets managed by the Pro Control Plane. 
+- ML metadata is stored on ZenML infrastructure, while actual ML data artifacts are stored in the customer's cloud. +- A hybrid option allows customers to store secrets on their side while connecting to the managed ZenML server. -### ZenML Pro Self-Hosted Architecture +#### ZenML Pro Self-Hosted Architecture +- All services, data, and secrets are deployed on the customer's cloud for maximum security. +- For setup inquiries, contact ZenML support. -For self-hosting: -- All services, data, and secrets are deployed on the customer's cloud, ensuring maximum security. +For further details on ZenML Pro concepts, refer to the [core concepts guide](../getting-started/zenml-pro/core-concepts.md). -For detailed architecture diagrams and further information, refer to the respective sections in the documentation. Interested users can sign up for a free trial of ZenML Pro [here](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). +For a free trial of ZenML Pro, sign up [here](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). ================================================== @@ -18729,7 +18650,8 @@ For detailed architecture diagrams and further information, refer to the respect # ZenML Installation and Getting Started -**ZenML** is a Python package that can be installed via `pip`: +## Installation +**ZenML** is a Python package installable via `pip`: ```shell pip install zenml @@ -18738,80 +18660,78 @@ pip install zenml **Supported Python Versions:** ZenML supports **Python 3.9, 3.10, 3.11, and 3.12**. ## Dashboard Installation -To access the ZenML web dashboard locally, install the optional dependencies: +To use the ZenML web dashboard locally, install the optional server dependencies: ```shell pip install "zenml[server]" ``` -**Recommendation:** Use a virtual environment (e.g., [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/) or [pyenv-virtualenv](https://github.com/pyenv/pyenv-virtualenv)). +**Recommendation:** Use a virtual environment (e.g., `virtualenvwrapper`, `pyenv-virtualenv`). ## MacOS Installation (Apple Silicon) -Set the following environment variable for proper server connections: +Set the following environment variable for local server connections: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` -This is necessary for local server use on Macs with Apple Silicon. +This is not needed if using ZenML as a client. ## Nightly Builds -ZenML nightly builds are available under the `zenml-nightly` package name. To install: +For the latest unstable features, install the nightly build: ```shell pip install zenml-nightly ``` -These builds are from the latest `develop` branch and may not be stable. - ## Verifying Installation -Check installation success via Bash: +Check the installation via Bash or Python: +Bash: ```bash zenml version ``` -Or through Python: - +Python: ```python import zenml print(zenml.__version__) ``` -## Running with Docker -ZenML is available as a Docker image. 
To start in a bash environment: +## Docker Usage +ZenML is available as a Docker image: +Start a bash environment: ```shell docker run -it zenmldocker/zenml /bin/bash ``` -To run the ZenML server with Docker: - +Run the ZenML server: ```shell docker run -it -d -p 8080:8080 zenmldocker/zenml-server ``` ## Deploying the Server -For local use with the dashboard: +To run ZenML locally with the dashboard: ```shell pip install "zenml[server]" zenml login --local # opens the dashboard locally ``` -For advanced features, deploy a centrally-accessible ZenML server. Options include [self-hosting](deploying-zenml/README.md) or registering for a free [ZenML Pro](https://cloud.zenml.io/signup?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link) account. +For advanced features, consider deploying a centrally-accessible ZenML server. Options include [self-hosting](deploying-zenml/README.md) or signing up for a free [ZenML Pro](https://cloud.zenml.io/signup) account. ================================================== === File: docs/book/getting-started/zenml-pro/teams.md === -### Summary of Teams in ZenML Pro +### ZenML Pro Teams Overview -**Overview**: Teams in ZenML Pro facilitate efficient user management within organizations and tenants by grouping users into a single entity. This guide covers the creation, management, and effective use of teams in MLOps workflows. +**Description**: Learn how to manage user groups in ZenML Pro through the concept of Teams, which helps streamline user management and access control in MLOps workflows. #### Key Benefits of Teams 1. **Group Management**: Manage permissions for multiple users simultaneously. -2. **Organizational Structure**: Align teams with your company's structure or projects. +2. **Organizational Structure**: Reflect your company's structure or project teams. 3. **Simplified Access Control**: Assign roles to teams instead of individual users. #### Creating and Managing Teams @@ -18820,7 +18740,7 @@ For advanced features, deploy a centrally-accessible ZenML server. Options inclu 2. Click on the "Teams" tab. 3. Use the "Add team" button. - **Required Information**: +- **Required Information**: - Team name - Description (optional) - Initial team members @@ -18828,25 +18748,25 @@ For advanced features, deploy a centrally-accessible ZenML server. Options inclu #### Adding Users to Teams 1. Go to the "Teams" tab in Organization settings. 2. Select the desired team. -3. Click "Add Members." +3. Click "Add Members". 4. Choose users to add. #### Assigning Teams to Tenants -1. Navigate to the tenant settings page. -2. Click on the "Members" tab, then "Teams." -3. Select "Add Team." +1. Go to the tenant settings page. +2. Click on the "Members" tab, then "Teams". +3. Select "Add Team". 4. Choose the team and assign a role. #### Team Roles and Permissions -- Roles assigned to teams within a tenant are inherited by all team members. Roles can be predefined (Admin, Editor, Viewer) or custom. For example, assigning the "Editor" role grants all team members Editor permissions in that tenant. +- Roles assigned to teams within a tenant are inherited by all team members. Roles can be predefined (Admin, Editor, Viewer) or custom. #### Best Practices -1. **Reflect Organization**: Create teams that mirror your organizational structure. -2. **Use Custom Roles**: Combine teams with custom roles for precise access control. -3. **Regular Audits**: Periodically review team memberships and roles. -4. 
**Document Purposes**: Maintain documentation on each team's purpose and associated projects. +1. **Reflect Your Organization**: Create teams that mirror your structure. +2. **Combine with Custom Roles**: Use custom roles for detailed access control. +3. **Regular Audits**: Review team memberships and roles periodically. +4. **Document Team Purposes**: Keep clear documentation on each team's purpose and projects. -By utilizing Teams in ZenML Pro, organizations can enhance user management, streamline access control, and improve MLOps workflows. +By utilizing Teams in ZenML Pro, you can enhance user management, simplify access control, and improve organization in MLOps workflows. ================================================== @@ -18854,114 +18774,77 @@ By utilizing Teams in ZenML Pro, organizations can enhance user management, stre # Organizations in ZenML Pro -ZenML Pro organizes work around the concept of an **Organization**, which is the highest structural level in the ZenML Cloud environment. An organization typically includes a group of users and one or more [tenants](./tenants.md). +In ZenML Pro, an **Organization** is the highest-level structure within the ZenML Cloud environment, encompassing a group of users and one or more [tenants](./tenants.md). ## Inviting Team Members -To invite users to your organization, click `Add Member` in the Organization settings and assign an initial Role. The invited user will receive an email. Users can log in to all tenants they have access to within the organization. + +To invite users to your organization, click `Add Member` in the Organization settings and assign an initial Role. The invited user will receive an email. Users can use their login across all accessible tenants. ## Managing Organization Settings -Organization settings, including billing and member roles, are managed at the organization level. Access these settings by clicking your profile picture in the top right corner and selecting "Settings". + +Organization settings, including billing information and member roles, can be managed by clicking your profile picture in the top right corner and selecting "Settings". ## API Operations -Additional operations related to organizations can be performed via the API. More details can be found at [ZenML Cloud API](https://cloudapi.zenml.io/). + +Various operations related to Organizations can be performed via the API. More information is available at [ZenML Cloud API](https://cloudapi.zenml.io/). ================================================== === File: docs/book/getting-started/zenml-pro/self-hosted.md === -# ZenML Pro Self-Hosted Deployment Guide Summary - -## Overview -ZenML Pro can be self-hosted in a Kubernetes cluster, requiring access to ZenML Pro container images and infrastructure components like a Kubernetes cluster, database server, load balancer, Ingress controller, HTTPS certificates, and DNS rules. Note that features like SSO and Run Templates are not available in the on-prem version. - -## Preparation and Prerequisites +### ZenML Pro Self-Hosted Deployment Guide -### Software Artifacts -- **Control Plane Artifacts**: - - API Server: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-api` - - Dashboard: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-dashboard` - - Helm Chart: `oci://public.ecr.aws/zenml/zenml-pro` +This guide outlines the steps to install ZenML Pro, including the Control Plane and Tenant servers, in a Kubernetes cluster. 
-- **Tenant Server Artifacts**: - - Server: `715803424590.dkr.ecr.eu-central-1.amazonaws.com/zenml-pro-server` - - OSS Helm Chart: `oci://public.ecr.aws/zenml/zenml` +#### Overview +ZenML Pro requires access to private container images and a self-provided infrastructure, including a Kubernetes cluster, a database server, and prerequisites for exposing services via HTTPS (load balancer, Ingress controller, SSL certificates, and DNS rules). Notably, Single Sign-On (SSO) and Run Templates are not available in the on-prem version. -- **Client Artifacts**: - - Client Image: `zenmldocker/zenml` (available on Docker Hub). +#### Prerequisites +1. **Software Artifacts**: Access to ZenML Pro container images and Helm charts is necessary. Contact ZenML for access. + - **Control Plane Artifacts**: + - API: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-api` + - Dashboard: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-dashboard` + - Helm Chart: `oci://public.ecr.aws/zenml/zenml-pro` + - **Tenant Server Artifacts**: + - Server: `715803424590.dkr.ecr.eu-central-1.amazonaws.com/zenml-pro-server` + - OSS Helm Chart: `oci://public.ecr.aws/zenml/zenml` + - **Client Artifacts**: Public client image at `zenmldocker/zenml` on Docker Hub. + +2. **Accessing Container Images**: Currently, ZenML Pro images are available only in AWS ECR. For access: + - Create an AWS IAM user/role with `AmazonEC2ContainerRegistryReadOnly` permissions. + - Authenticate Docker with ECR using the AWS CLI. + +3. **Air-Gapped Installation**: For environments without internet access, download artifacts using a machine with internet, save them, and transfer to the air-gapped environment. + +#### Infrastructure Requirements +1. **Kubernetes Cluster**: A functional cluster is required. +2. **Database Server**: Connect to an external MySQL or Postgres database (Postgres for Control Plane, MySQL for Tenant servers). +3. **Ingress Controller**: Install and configure an Ingress provider (e.g., NGINX). +4. **Domain Name**: Obtain an FQDN for the Control Plane and tenants. +5. **SSL Certificate**: Generate and configure SSL certificates for secure connections. -### Accessing ZenML Pro Container Images -Currently, ZenML Pro images are available only in AWS ECR. To access them: -1. Create an AWS account. -2. Create an IAM user/role with the `AmazonEC2ContainerRegistryReadOnly` policy. -3. Provide the IAM user/role ARN to ZenML Support for access. +#### Installation Steps +1. **Configure Helm Chart**: Customize the `values.yaml` file for your deployment. + - Key configurations include database credentials, server URL, and Ingress settings. -### AWS Authentication Steps -1. **Install AWS CLI** and configure credentials. -2. **Authenticate Docker** with ECR: +2. **Install Control Plane**: ```bash - aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 715803424590.dkr.ecr.eu-west-1.amazonaws.com + helm --namespace zenml-pro upgrade --install --create-namespace zenml-pro oci://public.ecr.aws/zenml/zenml-pro --version --values my-values.yaml ``` -### Air-Gapped Installation -For environments without internet access: -1. Prepare a machine with internet access to download artifacts. -2. Use a script to download images and Helm charts. -3. Transfer artifacts to the air-gapped environment and load them into Docker. -4. Update Helm values to point to your internal registry. - -### Infrastructure Requirements -1. **Kubernetes Cluster**: A functional cluster is required. -2. 
**Database Server**: MySQL or Postgres for the Control Plane; only MySQL for Tenant servers.
-3. **Ingress Controller**: For HTTP(S) traffic routing.
-4. **Domain Name**: FQDN for the Control Plane and tenants.
-5. **SSL Certificate**: Required for securing traffic.
-
-## Stage 1: Install ZenML Pro Control Plane
-
-### Configure the Helm Chart
-Customize the Helm chart using a `values.yaml` file. Key configurations include:
-- Database credentials
-- Server URL
-- Ingress settings
-
-### Install the Helm Chart
-Run the following command to install:
-```bash
-helm --namespace zenml-pro upgrade --install --create-namespace zenml-pro oci://public.ecr.aws/zenml/zenml-pro --version --values my-values.yaml
-```
-
-### Verify Installation
-Check the status of the installation:
-```bash
-kubectl -n zenml-pro get all
-```
+3. **Onboard Additional Users**: Use the provided Python script to create user accounts and manage access.
-## Onboard Additional Users
-1. Retrieve the admin password:
+4. **Enroll and Deploy Tenants**:
+   - Use the `enroll-tenant.py` script to create a tenant entry and generate a Helm `values.yaml` file.
+   - Deploy the tenant server using Helm:
   ```bash
-   kubectl get secret --namespace zenml-pro zenml-pro -o jsonpath="{.data.ZENML_CLOUD_ADMIN_PASSWORD}" | base64 --decode; echo
+   helm --namespace zenml-pro-<tenant-id> upgrade --install --create-namespace zenml oci://public.ecr.aws/zenml/zenml --version <version> --values <tenant-values>.yaml
   ```
-2. Create a `users.yml` file for new users.
-3. Use the `create_users.py` script to onboard users.
-
-## Stage 2: Enroll and Deploy ZenML Pro Tenants
-
-### Enroll a Tenant
-Run the `enroll-tenant.py` script to create a tenant entry and generate a Helm `values.yaml` file template.
-
-### Deploy the Tenant Server
-Use Helm to install the tenant server:
-```bash
-helm --namespace zenml-pro- upgrade --install --create-namespace zenml oci://public.ecr.aws/zenml/zenml --version --values .yaml
-```
-### Access the Tenant
-Log in as an organization member and add yourself as a tenant member to access the tenant dashboard.
+#### Accessing the Deployment
+After installation, access the ZenML Pro dashboard using the provided credentials. Be sure to follow the onboarding steps for new users and to manage tenant access accordingly.
-## Important Notes
-- Ensure all container image tags are synchronized with Helm chart versions.
-- Maintain the same version tags when copying images to internal registries.
-- The provided scripts may need adjustments based on specific security requirements and infrastructure setups.
+This guide covers the complete installation and configuration process for ZenML Pro in a self-hosted environment, from the Control Plane through tenant deployment and user onboarding.

==================================================

=== File: docs/book/getting-started/zenml-pro/core-concepts.md ===

@@ -18969,23 +18852,23 @@ Log in as an organization member and add yourself as a tenant member to access t
# ZenML Pro Core Concepts

-ZenML Pro features a distinct entity hierarchy compared to the open-source version. Key components include:
+ZenML Pro introduces a distinct entity hierarchy compared to the open-source version. Key components include:

- **Organization**: A collection of users, teams, and tenants.
-- **Tenant**: An isolated ZenML server deployment containing project resources.
+- **Tenant**: An isolated ZenML server deployment containing all project resources.
+- **Teams**: Groups of users within an organization for resource management. - **Users**: Individual accounts on a ZenML Pro instance. -- **Roles**: Define user permissions within a tenant or organization. -- **Templates**: Re-runnable pipeline configurations. +- **Roles**: Control user actions within a tenant or organization. +- **Templates**: Configurable pipeline runs that can be re-executed. -For more detailed information, refer to the following links: +For more details, refer to the following resources: -| Concept | Description | Link | -|------------------|-----------------------------------------------|-----------------------| -| Organizations | Managing organizations in ZenML Pro | [organization.md](./organization.md) | -| Tenants | Working with tenants in ZenML Pro | [tenants.md](./tenants.md) | -| Teams | Team management in ZenML Pro | [teams.md](./teams.md) | -| Roles & Permissions | Role-based access control in ZenML Pro | [roles.md](./roles.md) | +| **Concept** | **Description** | **Link** | +|-------------------|------------------------------------------------|------------------------| +| Organizations | Managing organizations in ZenML Pro | [organization.md](./organization.md) | +| Tenants | Working with tenants in ZenML Pro | [tenants.md](./tenants.md) | +| Teams | Team management in ZenML Pro | [teams.md](./teams.md) | +| Roles & Permissions| Role-based access control in ZenML Pro | [roles.md](./roles.md) | ================================================== @@ -18993,41 +18876,45 @@ For more detailed information, refer to the following links: ### ZenML Pro API Overview -The ZenML Pro API is a RESTful API compliant with OpenAPI 3.1.0, designed for managing ZenML resources across both SaaS and self-hosted instances. The SaaS version is accessible at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). +The ZenML Pro API is a RESTful API compliant with OpenAPI 3.1.0, enabling interaction with ZenML resources for both SaaS and self-hosted instances. The SaaS version is accessible at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). #### Key Features -- **Tenant Management** - - List tenants: `GET /tenants` - - Create a tenant: `POST /tenants` - - Get tenant details: `GET /tenants/{tenant_id}` - - Update a tenant: `PATCH /tenants/{tenant_id}` - -- **Organization Management** - - List organizations: `GET /organizations` - - Create an organization: `POST /organizations` - - Get organization details: `GET /organizations/{organization_id}` - - Update an organization: `PATCH /organizations/{organization_id}` - -- **User Management** - - List users: `GET /users` - - Get current user: `GET /users/me` - - Update user: `PATCH /users/{user_id}` - -- **Role-Based Access Control (RBAC)** - - Create a role: `POST /roles` - - Assign a role: `POST /roles/{role_id}/assignments` - - Check permissions: `GET /permissions` - -#### Authentication -To authenticate, log into your ZenML Pro account via the browser. Programmatic access is currently unsupported. +- **Tenant Management**: Create, list, get details, and update tenants. +- **Organization Management**: Manage organizations similarly. +- **User Management**: List users, get current user info, and update user details. +- **Role-Based Access Control (RBAC)**: Create roles, assign roles, and check permissions. +- **Authentication**: Requires user login for request authentication. Programmatic access is currently unavailable. 
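+
+Because the API follows standard REST and OpenAPI conventions, working with it largely comes down to plain HTTP calls plus the error-handling and rate-limit rules summarized further below. As a minimal, illustrative sketch only — the bearer-token header is an assumption, since the API does not currently support programmatic authentication — a rate-limit-aware GET helper could look like this:
+
+```python
+import time
+from typing import Any
+
+import requests
+
+BASE_URL = "https://cloudapi.zenml.io"
+
+def get_with_backoff(path: str, token: str, max_retries: int = 5) -> Any:
+    """Fetch a resource, retrying with exponential backoff on 429 responses."""
+    for attempt in range(max_retries):
+        response = requests.get(
+            f"{BASE_URL}{path}",
+            # Assumed auth header; programmatic access is not officially supported yet.
+            headers={"Authorization": f"Bearer {token}"},
+        )
+        if response.status_code == 429:  # rate-limited: back off, then retry
+            time.sleep(2 ** attempt)
+            continue
+        response.raise_for_status()
+        return response.json()
+    raise RuntimeError(f"Still rate-limited after {max_retries} attempts: {path}")
+
+# Hypothetical usage: tenants = get_with_backoff("/tenants", token)
+```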
+ +#### Important API Endpoints +- **Tenant Management**: + - `GET /tenants`: List tenants + - `POST /tenants`: Create a tenant + - `GET /tenants/{tenant_id}`: Get tenant details + - `PATCH /tenants/{tenant_id}`: Update a tenant + +- **Organization Management**: + - `GET /organizations`: List organizations + - `POST /organizations`: Create an organization + - `GET /organizations/{organization_id}`: Get organization details + - `PATCH /organizations/{organization_id}`: Update an organization + +- **User Management**: + - `GET /users`: List users + - `GET /users/me`: Get current user + - `PATCH /users/{user_id}`: Update user + +- **Role-Based Access Control**: + - `POST /roles`: Create a role + - `POST /roles/{role_id}/assignments`: Assign a role + - `GET /permissions`: Check permissions #### Error Handling -The API uses standard HTTP status codes. Error responses include a message and additional details. +Standard HTTP status codes indicate request success or failure. Error responses include a message and additional details. #### Rate Limiting -The API may enforce rate limits. Exceeding these limits results in a 429 status code. Implement backoff and retry logic for handling such cases. +The API may enforce rate limits. Exceeding these limits results in a 429 status code. Implement backoff and retry logic accordingly. -For comprehensive API details, visit [https://cloudapi.zenml.io](https://cloudapi.zenml.io). +For comprehensive details on endpoints, request/response schemas, and features, refer to the full documentation at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). ================================================== @@ -19035,97 +18922,91 @@ For comprehensive API details, visit [https://cloudapi.zenml.io](https://cloudap ### ZenML Pro: Roles and Permissions Overview -ZenML Pro utilizes a role-based access control (RBAC) system to manage permissions for users and teams. This guide outlines the predefined roles, how to assign them, and the creation of custom roles. +ZenML Pro utilizes a role-based access control (RBAC) system to manage permissions for users and teams. This guide outlines the available roles, assignment procedures, and custom role creation. #### Organization-Level Roles -Three predefined roles exist at the organization level: - -1. **Org Admin**: Full control, can manage members, tenants, billing, and roles. -2. **Org Editor**: Can manage tenants and teams but lacks access to subscription info and deletion rights. +1. **Org Admin**: Full control; can manage members, tenants, billing, and roles. +2. **Org Editor**: Manages tenants and teams; no access to subscription info or deletion rights. 3. **Org Viewer**: Read-only access to tenants. -**Assigning Organization Roles**: -- Navigate to Organization settings > Members tab. -- Update existing member roles or add new members. +**Assignment Steps**: +- Go to Organization settings > Members tab. +- Update roles or add new members. **Notes**: -- Organization admins can add themselves to any tenant role. -- Editors and viewers cannot add themselves to tenants they are not part of. -- Custom organization roles can be created via the [ZenML Pro API](https://cloudapi.zenml.io/). +- Admins can assign themselves any tenant role. +- Editors and viewers cannot access tenants they are not part of. +- Custom organization roles can be created via the ZenML Pro API. #### Tenant-Level Roles -Tenant roles define user permissions within a specific tenant. 
Predefined roles include: - +Roles dictate permissions within a specific tenant. Predefined roles include: 1. **Admin**: Full control over tenant resources. -2. **Editor**: Can create and share resources but cannot modify or delete them. +2. **Editor**: Can create and share resources; cannot modify or delete. 3. **Viewer**: Read-only access. -**Creating Custom Roles**: -1. Go to tenant settings > Roles > Add Custom Role. -2. Provide a name, description, and select a base role. -3. Edit permissions for various resources (e.g., Artifacts, Models). +**Custom Roles Creation**: +1. Access tenant settings > Roles. +2. Click "Add Custom Role". +3. Define name, description, and base role. +4. Edit permissions for various resources (e.g., Artifacts, Models). **Managing Role Permissions**: -- Access Roles page in tenant settings. -- Select a role and click "Edit Permissions" to adjust resource permissions. +- Go to tenant settings > Roles. +- Select a role and click "Edit Permissions" to adjust. -#### Sharing Individual Resources +#### Sharing Resources Users can share individual resources directly through the dashboard. #### Best Practices 1. **Least Privilege**: Assign minimal necessary permissions. 2. **Regular Audits**: Review role assignments periodically. -3. **Use Custom Roles**: Tailor roles for specific team needs. -4. **Document Roles**: Keep records of custom roles and their purposes. +3. **Custom Roles**: Tailor roles for specific team needs. +4. **Documentation**: Keep records of custom roles and their purposes. -By effectively utilizing ZenML Pro's RBAC, organizations can secure resources while promoting collaboration in MLOps projects. +By implementing ZenML Pro's RBAC, teams can ensure appropriate access levels, enhancing security and collaboration in MLOps projects. ================================================== === File: docs/book/getting-started/zenml-pro/tenants.md === -### ZenML Pro Tenants Documentation Summary +### ZenML Pro Tenants Overview -**Overview of Tenants** -- Tenants are isolated deployments of the ZenML server, each with its own users, roles, and resources. -- All ZenML Pro activities (pipelines, stacks, runs, connectors) are scoped to a tenant. -- ZenML Pro offers enhanced features over the open-source version. +**Definition**: Tenants in ZenML Pro are isolated deployments of the ZenML server, each with its own users, roles, and resources. All operations, such as pipelines, stacks, and runs, are scoped to a tenant. -**Creating a Tenant** +**Creating a Tenant**: 1. Navigate to your organization page. 2. Click "+ New Tenant". 3. Name your tenant and click "Create Tenant". -4. Optionally, create a tenant via the Cloud API using `POST /organizations`. -**Organizing Tenants** +Alternatively, create a tenant via the Cloud API using the `POST /organizations` endpoint at `https://cloudapi.zenml.io/`. + +**Organizing Tenants**: - **By Development Stage**: - **Staging Tenants**: For development and testing. - - **Production Tenants**: For live services with stricter controls and performance optimization. + - **Production Tenants**: For live services with stricter access controls and monitoring. - **By Business Logic**: - **Project-based**: Separate tenants for different ML projects (e.g., Recommendation System, NLP). - - **Team-based**: Align with organizational structure (e.g., Data Science Team). - - **Data Sensitivity**: Classify tenants based on data sensitivity (e.g., Public, Internal). 
+ - **Team-based**: Align tenants with organizational teams (e.g., Data Science, ML Engineering). + - **Data Sensitivity**: Classify tenants based on data sensitivity (e.g., Public, Internal, Confidential). -**Best Practices for Tenant Organization** +**Best Practices**: 1. Use clear naming conventions. 2. Implement role-based access control. -3. Maintain documentation for each tenant's purpose. +3. Maintain documentation for each tenant. 4. Conduct regular reviews of tenant structure. 5. Ensure scalability for future growth. -**Using Your Tenant** -- Tenants enable running pipelines, experiments, and accessing Pro features like: - - Model Control Plane - - Artifact Control Plane - - Dashboard pipeline execution - - Pipeline run templates +**Using Your Tenant**: +Tenants enable running pipelines, experiments, and accessing Pro features like: +- Model Control Plane +- Artifact Control Plane +- Pipeline execution from the Dashboard -**Accessing Tenant Documentation** -- Each tenant has a connection URL for the `zenml` client and OpenAPI specification. -- Access documentation at `/docs` for available methods, including REST API pipeline execution. +**Accessing Tenant Documentation**: +Each tenant has a connection URL for the `zenml` client and to access the OpenAPI specification. Visit `/docs` for available methods, including running pipelines via the REST API. -For further details, refer to the specific sections on model management, artifact handling, and API access. +For further details, refer to the [API reference](../../reference/api-reference.md). ================================================== @@ -19133,37 +19014,37 @@ For further details, refer to the specific sections on model management, artifac # ZenML Pro Overview -ZenML Pro enhances the open-source ZenML product with a managed control plane, offering several key features: +ZenML Pro enhances the open-source ZenML product with several advanced features: - **Managed Deployment**: Deploy multiple ZenML servers (tenants). - **User Management**: Create organizations and teams for scalable user management. -- **Role-Based Access Control**: Implement fine-grained access control with customizable roles. -- **Model and Artifact Control Plane**: Utilize the Model Control Plane and Artifact Control Plane for better tracking and management of ML assets. -- **Triggers and Run Templates**: Create and run templates for pipelines via the dashboard or API, facilitating quick iterations. +- **Role-Based Access Control**: Implement customizable roles for secure resource management. +- **Model and Artifact Control Plane**: Utilize the Model Control Plane and Artifact Control Plane for better tracking of ML assets. +- **Triggers and Run Templates**: Create and run templates via the dashboard or API for efficient pipeline management. - **Early-Access Features**: Access pro-specific features like triggers, filters, and usage reports. -For more information, visit the [ZenML website](https://zenml.io/pro) or create a [free account](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). +For more information, visit the [ZenML website](https://zenml.io/pro). -## Deployment Scenarios: SaaS vs Self-hosted +## Deployment Scenarios -ZenML Pro can be deployed as a SaaS solution, minimizing the need for resource allocation to server management, or fully self-hosted. 
For more details, refer to the [self-hosted deployment guide](./self-hosted.md) or [book a demo](https://www.zenml.io/book-your-demo). +ZenML Pro can be deployed as a SaaS solution or fully self-hosted. The SaaS option simplifies server management, allowing focus on MLOps workflows. For self-hosted deployment, refer to the [self-hosted deployment guide](./self-hosted.md). -### Key Links +### Key Resources - [Tenants](./tenants.md) - [Organizations](./organization.md) - [Teams](./teams.md) - [Roles](./roles.md) - [Self-Hosted Deployments](./self-hosted.md) +For a demo or to create a free account, visit [ZenML Cloud](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). + ================================================== === File: docs/book/getting-started/deploying-zenml/custom-secret-stores.md === -### Custom Secret Stores in ZenML - -The secrets store is essential for managing secret values in ZenML, handling storage, updates, and deletions of secrets while storing metadata in an SQL database. The interface for all secrets store back-ends is defined in the `zenml.zen_stores.secrets_stores.secrets_store_interface` module. +### Custom Secret Stores -#### SecretsStoreInterface +The secrets store is essential for managing secrets in ZenML, handling the storage, updating, and deletion of secret values, while ZenML secret metadata is stored in an SQL database. The interface for all secrets store back-ends is defined in the `zenml.zen_stores.secrets_stores.secrets_store_interface` module. Below is a summary of the key methods in the `SecretsStoreInterface`: ```python class SecretsStoreInterface(ABC): @@ -19177,26 +19058,26 @@ class SecretsStoreInterface(ABC): @abstractmethod def get_secret_values(self, secret_id: UUID) -> Dict[str, str]: - """Retrieve secret values for an existing secret.""" - + """Get secret values for an existing secret.""" + @abstractmethod def update_secret_values(self, secret_id: UUID, secret_values: Dict[str, str]) -> None: """Update secret values for an existing secret.""" - + @abstractmethod def delete_secret_values(self, secret_id: UUID) -> None: """Delete secret values for an existing secret.""" ``` -#### Creating a Custom Secrets Store +### Building a Custom Secrets Store -To implement a custom secrets store: +To create a custom secrets store: -1. **Inherit from Base Class**: Create a class that extends `zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore` and implement the abstract methods from the interface. Set `SecretsStoreType.CUSTOM` as the `TYPE`. - -2. **Configuration Class**: If configuration is needed, create a class inheriting from `SecretsStoreConfiguration` to define parameters, using it as the `CONFIG_TYPE`. +1. **Inherit from Base Class**: Create a class that inherits from `zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore` and implement the abstract methods from the interface. Set `SecretsStoreType.CUSTOM` as the `TYPE`. + +2. **Configuration Class**: If configuration is needed, inherit from `SecretsStoreConfiguration` and define your parameters. Use this as the `CONFIG_TYPE`. -3. **Server Configuration**: Ensure your implementation is included in the ZenML server's container image. Configure the server to use your custom store via environment variables or helm chart values, as detailed in the deployment guide. +3. **Server Configuration**: Ensure your code is included in the ZenML server's container image. 
Configure the ZenML server to use your custom secrets store via environment variables or helm chart values, as detailed in the deployment guide. For complete documentation, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-zen_stores/#zenml.zen_stores.secrets_stores.secrets_store_interface.SecretsStoreInterface). @@ -19204,47 +19085,55 @@ For complete documentation, refer to the [SDK docs](https://sdkdocs.zenml.io/lat === File: docs/book/getting-started/deploying-zenml/deploy-with-docker.md === -### Summary of ZenML Docker Deployment Documentation +### Summary: Deploying ZenML in a Docker Container -**Overview**: This documentation provides guidance on deploying the ZenML server using Docker, including configuration options, local deployment, and advanced use cases. +**Overview**: The ZenML server can be deployed using the Docker container image `zenmldocker/zenml-server`. This guide outlines configuration options and deployment methods, including local testing and advanced configurations. -#### Quick Local Deployment -To deploy ZenML locally using Docker without complex configurations, run: +#### Local Deployment +For a quick local deployment, ensure Docker is running and execute: ```bash zenml login --local --docker ``` -This command sets up a local ZenML server with a shared SQLite database. +This command sets up a ZenML server using a local SQLite database. -#### ZenML Server Configuration Options -When deploying a custom ZenML server, configure the following environment variables: +#### Configuration Options +When deploying a custom ZenML server, configure environment variables for settings like database connections and user details. Key environment variables include: -- **ZENML_STORE_URL**: URL for SQLite or MySQL database. +- **ZENML_STORE_URL**: Database URL (SQLite or MySQL). - SQLite: `sqlite:////path/to/zenml.db` - MySQL: `mysql://username:password@host:port/database` - **ZENML_STORE_SSL_CA, ZENML_STORE_SSL_CERT, ZENML_STORE_SSL_KEY**: SSL configurations for MySQL connections. + +- **ZENML_LOGGING_VERBOSITY**: Controls log verbosity (e.g., `INFO`, `DEBUG`). + +- **ZENML_SERVER_RATE_LIMIT_ENABLED**: Enables rate limiting for the API. -- **ZENML_LOGGING_VERBOSITY**: Set log level (e.g., `DEBUG`, `INFO`). - -- **ZENML_STORE_BACKUP_STRATEGY**: Backup strategy for the database (default is `in-memory`). +If no `ZENML_STORE_*` variables are set, an SQLite database is created at `/zenml/.zenconfig/local_stores/default_zen_store/zenml.db`. -- **ZENML_SERVER_RATE_LIMIT_ENABLED**: Enable rate limiting for API requests. +#### Secret Store Configuration +The default secret store is the SQL database. For external secret management (AWS, GCP, Azure, HashiCorp), set the following: -- **ZENML_SECRETS_STORE_TYPE**: Specify the type of secrets store (e.g., `sql`, `aws`, `gcp`, `azure`, `hashicorp`, `custom`). +- **ZENML_SECRETS_STORE_TYPE**: Type of secret store (e.g., `sql`, `aws`, `gcp`, `azure`, `hashicorp`, `custom`). +- **ZENML_SECRETS_STORE_ENCRYPTION_KEY**: Key for encrypting secrets (recommended length: 32 characters). -#### Secrets Store Configuration -- **Default**: Uses SQL database as a secrets store. -- **AWS**: Requires permissions for `secretsmanager` actions. -- **GCP**: Requires permissions for `secretmanager` actions. -- **Azure**: Requires Azure Key Vault configurations. -- **HashiCorp**: Requires Vault server URL and token. 
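+
+To make these settings concrete, here is a minimal sketch of passing them to the server container (the placeholder key is illustrative; `openssl rand -hex 16` is one convenient way to generate a 32-character value):
+
+```bash
+# Keep the default SQL secrets store, but encrypt stored secret values.
+# Guard the key carefully: without it, previously stored secrets cannot be decrypted.
+docker run -it -d -p 8080:8080 --name zenml \
+  -e ZENML_SECRETS_STORE_TYPE=sql \
+  -e ZENML_SECRETS_STORE_ENCRYPTION_KEY='<your-32-character-key>' \
+  zenmldocker/zenml-server
+```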
- -#### Running ZenML Server with Docker -To run the ZenML server with default settings: +#### Running the ZenML Server +To run the ZenML server with Docker: ```bash docker run -it -d -p 8080:8080 --name zenml zenmldocker/zenml-server ``` -For a MySQL database: +Access the dashboard at `http://localhost:8080`. + +#### Persisting Data +To persist the SQLite database: +```bash +mkdir zenml-server +docker run -it -d -p 8080:8080 --name zenml \ + --mount type=bind,source=$PWD/zenml-server,target=/zenml/.zenconfig/local_stores/default_zen_store \ + zenmldocker/zenml-server +``` + +For MySQL, run: ```bash docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password mysql:8.0 ``` @@ -19256,7 +19145,7 @@ docker run -it -d -p 8080:8080 --name zenml \ ``` #### Using Docker Compose -Create a `docker-compose.yml` for running ZenML with MySQL: +Create a `docker-compose.yml`: ```yaml version: "3.9" services: @@ -19268,20 +19157,14 @@ services: image: zenmldocker/zenml-server environment: - ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml - depends_on: - - mysql ``` -Start with: +Run: ```bash docker compose up -d ``` #### Backup and Recovery -ZenML automatically backs up the database before migrations. Configure backup strategy with `ZENML_STORE_BACKUP_STRATEGY`: -- `disabled`: No backup. -- `in-memory`: Fast but not persistent. -- `database`: Backup to a separate database. -- `dump-file`: Backup to a filesystem location. +Automated backups occur before database migrations. Configure backup strategies with `ZENML_STORE_BACKUP_STRATEGY` (e.g., `in-memory`, `database`, `dump-file`). #### Troubleshooting Check logs with: @@ -19289,48 +19172,50 @@ Check logs with: - For manual Docker deployments: `docker logs zenml -f` - For Docker Compose: `docker compose logs -f` -This documentation serves as a comprehensive guide for deploying and managing ZenML in a Docker environment, covering essential configurations and operational commands. +This guide provides essential commands and configurations for deploying and managing a ZenML server using Docker. ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-using-huggingface-spaces.md === -### Deploying ZenML to Hugging Face Spaces +### Deploying ZenML on HuggingFace Spaces -**Overview**: Hugging Face Spaces allows for quick, free deployment of ZenML, ideal for testing without infrastructure overhead. For production, ensure persistent storage is enabled to avoid data loss. +HuggingFace Spaces allows for quick deployment of ZenML, facilitating ML project hosting without infrastructure overhead. For production use, enable [persistent storage](https://huggingface.co/docs/hub/en/spaces-storage) to prevent data loss. -**Deployment Steps**: -1. **Create a Space**: Click [here](https://huggingface.co/new-space?template=zenml/zenml) to start. Specify: - - Owner (personal or organization) +#### Deployment Steps +1. **Create a Space**: Click [here](https://huggingface.co/new-space?template=zenml/zenml) to set up your ZenML app. Specify: + - Owner (personal account or organization) - Space name - Visibility (set to 'Public' for local connections) -2. **Select Machine Type**: Choose a higher-tier paid CPU instance to avoid auto-shutdowns. +2. **Select Machine**: Choose a higher-tier machine to avoid auto-shutdowns. Consider setting up a MySQL database for persistent storage. -3. **Customize Appearance**: Modify metadata in `README.md` for titles and colors. 
Refer to the [Hugging Face documentation](https://huggingface.co/docs/hub/spaces-config-reference) for configuration details. +3. **Customize Appearance**: Modify the `README.md` file in "Files and Versions" to personalize your Space. -4. **Monitor Deployment**: Wait for the status to change from 'Building' to 'Running'. If the ZenML UI is not visible, refresh the page. +4. **Monitor Status**: After creation, watch for 'Building' to switch to 'Running'. Refresh if the ZenML login UI isn't visible. -5. **Access Your Space**: Use the "Embed this Space" option to get the "Direct URL" (e.g., `https://-.hf.space`) to initialize your ZenML server. +5. **Get Direct URL**: Use the "Embed this Space" option to copy the "Direct URL" (format: `https://-.hf.space`) for initializing your ZenML server. -**Connecting to ZenML Server**: -- Use the following CLI command after installing ZenML: - ```shell - zenml login '' - ``` -- Access the ZenML dashboard directly via the Direct URL. +#### Connecting to ZenML Server +To connect from your local machine: +```shell +zenml login '' +``` +Ensure the Space visibility is set to 'Public'. -**Configuration Options**: -- Default uses SQLite (non-persistent). For a persistent database, modify the `Dockerfile` in your Space's root directory. Refer to [advanced server configuration options](deploy-with-docker.md#advanced-server-configuration-options) for details. -- For secret management, use Hugging Face's 'Repository secrets' in your `Dockerfile`. If using a cloud secrets backend, update your ZenML server password in the Dashboard settings to secure access. +#### Configuration Options +- **Database**: By default, ZenML uses an SQLite database. For a persistent database, modify the `Dockerfile` in your Space's root directory. +- **Secrets Management**: Use HuggingFace's 'Repository secrets' for managing secrets in your `Dockerfile`. Update your ZenML server password via the Dashboard settings for security. -**Troubleshooting**: -- View logs via the "Open Logs" button for server issues. For further support, contact the [Slack channel](https://zenml.io/slack/). +#### Troubleshooting +Access logs by clicking "Open Logs" for insights into server issues. For additional support, contact the [Slack channel](https://zenml.io/slack/). -**Upgrading ZenML Server**: -- The default space updates automatically. To manually update, select 'Factory reboot' in the 'Settings' tab (note: this will erase data unless using a MySQL persistent database). To use an earlier version, adjust the `FROM` statement in the `Dockerfile`. +#### Upgrading ZenML Server +The default Space uses the latest ZenML version. To update: +- Select 'Factory reboot' in the 'Settings' tab (note: this wipes existing data unless using a MySQL database). +- Change the `FROM` statement in the `Dockerfile` to use an earlier version. -This summary captures the essential steps and configurations for deploying ZenML on Hugging Face Spaces, ensuring critical information is retained for effective use and troubleshooting. +For more details on configuration, refer to the [HuggingFace documentation](https://huggingface.co/docs/hub/spaces-config-reference). 
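+
+To sketch the database customization mentioned above: a minimal `Dockerfile` for a Space backed by an external MySQL instance, reusing the `ZENML_STORE_URL` setting from the Docker deployment guide (the image tag, hostname, and credentials are placeholders, not values from the official Space template):
+
+```Dockerfile
+# Build the Space from the stock ZenML server image.
+FROM zenmldocker/zenml-server:latest
+
+# Point the server at a persistent MySQL database instead of the default
+# SQLite file, which is lost whenever the Space is rebuilt.
+ENV ZENML_STORE_URL="mysql://username:password@your-mysql-host:3306/zenml"
+```
+
+In a real Space, inject the credentials via HuggingFace 'Repository secrets' rather than hard-coding them in the `Dockerfile`.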
================================================== @@ -19338,102 +19223,123 @@ This summary captures the essential steps and configurations for deploying ZenML ### Summary: Deploying ZenML in a Kubernetes Cluster with Helm -**Overview**: This documentation provides instructions for deploying ZenML in a Kubernetes cluster using Helm, detailing prerequisites, configuration, and deployment scenarios. +#### Overview +ZenML can be deployed in a Kubernetes cluster using a Helm chart, available on [ArtifactHub](https://artifacthub.io/packages/helm/zenml/zenml). This document outlines prerequisites, configuration, and deployment scenarios. #### Prerequisites -- **Kubernetes Cluster**: Required. -- **Database**: Recommended to use a MySQL-compatible database (version 8.0+), but SQLite is the default. -- **Tools**: - - Kubernetes client (`kubectl`) - - Helm -- **Secrets Management**: Optional external secrets manager (e.g., AWS Secrets Manager, GCP Secrets Manager). +- **Kubernetes Cluster** +- **MySQL-Compatible Database** (recommended, version 8.0+) +- **Kubernetes Client** (`kubectl`) +- **Helm** installed +- **Optional**: External Secrets Manager (e.g., AWS Secrets Manager, GCP Secrets Manager) #### ZenML Helm Configuration - Review the [`values.yaml`](https://artifacthub.io/packages/helm/zenml/zenml?modal=values) file for customizable settings. -- Collect database and secrets management information for Helm chart configuration. +- Prepare database and secrets management information for Helm chart configuration. ##### Database Information -For external MySQL-compatible databases, gather: -- Hostname, port, username, password, and database name. -- SSL certificates if using SSL. +For external MySQL-compatible databases: +- Hostname and port +- Username and password (create a dedicated user) +- Database name (can be created by ZenML) +- SSL certificates (if using SSL) ##### Secrets Management Information -For external secrets managers, gather: -- **AWS**: Region, access key ID, and secret access key. -- **GCP**: Project ID, service account with access. -- **Azure**: Key Vault name, tenant ID, client ID, and secret. -- **HashiCorp Vault**: Server URL and access token. +For external secrets management services: +- **AWS**: Region, access key ID, secret access key +- **GCP**: Project ID, service account with access +- **Azure**: Key Vault name, tenant ID, client ID, client secret +- **HashiCorp Vault**: Vault server URL, access token #### Optional Cluster Services -- **Ingress**: Recommended for HTTP/HTTPS exposure. -- **cert-manager**: For managing TLS certificates. +- **Ingress Service**: Recommended for exposing HTTP services (e.g., `nginx-ingress`). +- **Cert-Manager**: For managing TLS certificates. #### ZenML Helm Installation -1. **Configure Helm Chart**: - - Pull the chart: - ```bash - helm pull oci://public.ecr.aws/zenml/zenml --version --untar - ``` - - Create a `custom-values.yaml` file based on `values.yaml` and modify necessary configurations (e.g., database URL, Ingress settings). -2. **Install Helm Chart**: +##### Configure the Helm Chart +1. Pull the Helm chart: ```bash - helm -n install zenml-server . --create-namespace --values custom-values.yaml + helm pull oci://public.ecr.aws/zenml/zenml --version --untar ``` +2. Create a `custom-values.yaml` from `values.yaml` and modify necessary configurations (e.g., database URL, Ingress settings). -3. **Connect to ZenML Server**: - - Activate the server via the URL provided post-deployment. 
-  - Connect local client:
-    ```bash
-    zenml login https://zenml.example.com:8080 --no-verify-ssl
-    ```
-  - To disconnect:
-    ```bash
-    zenml logout
-    ```
+##### Install the Helm Chart
+Run the following command:
+```bash
+helm -n <namespace> install zenml-server . --create-namespace --values custom-values.yaml
+```
+
+#### Connect to the Deployed ZenML Server
+After deployment, activate the ZenML server via its URL. To connect your local client:
+```bash
+zenml login https://zenml.example.com:8080 --no-verify-ssl
+```
+To disconnect:
+```bash
+zenml logout
+```

#### Deployment Scenarios

-- **Minimal Deployment**: Uses SQLite and ClusterIP service.
-  ```yaml
-  zenml:
-    ingress:
-      enabled: false
-  ```
-  Access via port-forwarding:
-  ```bash
-  kubectl -n zenml-server port-forward svc/zenml-server 8080:8080
-  zenml login http://localhost:8080
-  ```
-- **Basic Deployment**: Uses Ingress with TLS.
-  Install `cert-manager` and `nginx-ingress`:
-  ```bash
-  helm repo add jetstack https://charts.jetstack.io
-  helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
-  helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress --create-namespace
-  ```
+1. **Minimal Deployment** (SQLite, no Ingress):
+   ```yaml
+   zenml:
+     ingress:
+       enabled: false
+   ```
+
+2. **Basic Deployment** (Local DB, Ingress with TLS):
+   Install `cert-manager` and `nginx-ingress`:
+   ```bash
+   helm repo add jetstack https://charts.jetstack.io
+   helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
+   helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress --create-namespace
+   ```
+
+   Create a `ClusterIssuer` for Let's Encrypt:
+   ```bash
+   kubectl apply -f - <<EOF
+   apiVersion: cert-manager.io/v1
+   kind: ClusterIssuer
+   metadata:
+     name: letsencrypt-staging
+   spec:
+     acme:
+       server: https://acme-staging-v02.api.letsencrypt.org/directory
+       email: <your-email-address>
+       privateKeySecretRef:
+         name: letsencrypt-staging
+       solvers:
+       - http01:
+           ingress:
+             class: nginx
+   EOF
+   ```
+
+   Helm values:
+   ```yaml
+   zenml:
+     ingress:
+       enabled: true
+       annotations:
+         cert-manager.io/cluster-issuer: "letsencrypt-staging"
+       tls:
+         enabled: true
+   ```

-- **Shared Ingress Controller**: Solutions for using a dedicated Ingress hostname or URL path.
+3. **Shared Ingress Controller**: Use a dedicated hostname or URL path for ZenML if the root path is in use.

#### Secrets Store Configuration
-- Default is SQL database; can switch to external providers (AWS, GCP, Azure, HashiCorp).
-- Example for AWS Secrets Manager:
-  ```yaml
-  zenml:
-    secretsStore:
-      enabled: true
-      type: aws
-      aws:
-        authMethod: secret-key
-        authConfig:
-          region: us-east-1
-          aws_access_key_id:
-          aws_secret_access_key:
-  ```
+ZenML defaults to using the SQL database for secrets. To use external services, configure the Helm values accordingly. Ensure proper permissions for the chosen secrets management service.

#### Backup and Recovery
-- Automated database backups are enabled by default.
-- Backup strategies include `disabled`, `in-memory`, `database`, and `dump-file`.
+ZenML automatically backs up the database before upgrades. Configure backup strategies via `zenml.database.backupStrategy`:
+- `disabled`
+- `in-memory`
+- `database`
+- `dump-file`

Example configuration for persistent volume backup:
```yaml
@@ -19446,7 +19352,7 @@ podSecurityContext:
  fsGroup: 1000
```

-This summary captures the essential technical details for deploying ZenML using Helm in a Kubernetes environment, including configuration, installation steps, and backup strategies.
+This summary provides a concise overview of deploying ZenML using Helm in a Kubernetes environment, covering prerequisites, configuration, installation, and backup strategies. ================================================== @@ -19454,25 +19360,25 @@ This summary captures the essential technical details for deploying ZenML using ### Deploying ZenML with Custom Docker Images -Deploying ZenML typically uses the default `zenmlhub/zenml-server` Docker image, but custom images may be necessary for: +Deploying ZenML typically uses the default `zenmlhub/zenml-server` Docker image. Custom images may be necessary for: -- Custom artifact stores requiring artifact visualizations or step logs. -- Forked ZenML repositories with modifications to server/database logic. +- Implementing a custom artifact store for visualizations or step logs. +- Deploying a server based on a fork of the ZenML repository with modifications. -**Note:** Custom Docker images can only be deployed using [Docker](deploy-with-docker.md) or [Helm](deploy-with-helm.md). +**Note:** Custom Docker images are supported only for [Docker](deploy-with-docker.md) or [Helm](deploy-with-helm.md) deployments. ### Build and Push Custom ZenML Server Docker Image -1. **Set Up a Container Registry:** Create an account on a registry like [Docker Hub](https://hub.docker.com/). -2. **Clone ZenML Repository:** Check out the desired branch, e.g., for version 0.41.0: +1. Set up a container registry (e.g., Docker Hub). +2. Clone ZenML and checkout the desired branch: ```bash git checkout release/0.41.0 ``` -3. **Copy Dockerfile:** +3. Copy the base Dockerfile: ```bash cp docker/base.Dockerfile docker/custom.Dockerfile ``` -4. **Modify Dockerfile:** +4. Modify the Dockerfile: - Add dependencies: ```bash RUN pip install @@ -19481,33 +19387,32 @@ Deploying ZenML typically uses the default `zenmlhub/zenml-server` Docker image, ```bash RUN pip install -e .[server,secrets-aws,secrets-gcp,secrets-azure,secrets-hashicorp,s3fs,gcsfs,adlfs,connectors-aws,connectors-gcp,connectors-azure] ``` -5. **Build and Push Image:** +5. Build and push the image: ```bash docker build -f docker/custom.Dockerfile . -t /: --platform linux/amd64 docker push /: ``` -**Info:** To verify your custom image locally, refer to the [Deploy a custom ZenML image via Docker](deploy-with-custom-image.md#deploy-a-custom-zenml-image-via-docker) section. +**Tip:** To verify your custom image locally, refer to the [Deploy a custom ZenML image via Docker](deploy-with-custom-image.md#deploy-a-custom-zenml-image-via-docker) section. 
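+
+For the local verification mentioned in the tip, a quick smoke test of the image you just pushed is usually enough — a sketch, with the image reference left as a placeholder:
+
+```bash
+# Run the custom server image locally and confirm it starts cleanly;
+# the dashboard should come up on http://localhost:8080.
+docker run -it --rm -p 8080:8080 <registry>/<image>:<tag>
+```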
### Deploy ZenML with Your Custom Image #### Deploy via Docker -Refer to the general [ZenML Docker Deployment Guide](deploy-with-docker.md) and replace `zenmldocker/zenml-server` with your custom image: -- To run the ZenML server: - ```bash - docker run -it -d -p 8080:8080 --name zenml /: - ``` -- For `docker-compose`, update `docker-compose.yml`: - ```yaml - services: - zenml: - image: /: - ``` +Follow the [ZenML Docker Deployment Guide](deploy-with-docker.md), replacing `zenmldocker/zenml-server` with your custom image reference: +```bash +docker run -it -d -p 8080:8080 --name zenml /: +``` +For `docker-compose`, modify `docker-compose.yml`: +```yaml +services: + zenml: + image: /: +``` #### Deploy via Helm -Refer to the general [ZenML Helm Deployment Guide](deploy-with-helm.md) and modify the `image` section in `values.yaml`: +Refer to the [ZenML Helm Deployment Guide](deploy-with-helm.md) and adjust the `image` section in `values.yaml`: ```yaml zenml: image: @@ -19522,36 +19427,40 @@ zenml: # Deploying ZenML ## Overview -Deploying ZenML to a production environment provides benefits such as: -1. **Scalability**: Handles large workloads for faster processing. -2. **Reliability**: Ensures high availability and fault tolerance. -3. **Collaboration**: Facilitates teamwork and model iteration. +Deploying ZenML to a production environment offers benefits such as scalability, reliability, and enhanced collaboration. However, it involves complexities in infrastructure setup. ## Components A ZenML deployment includes: -- **FastAPI Server**: Uses SQLite or MySQL as a database. -- **Python Client**: Interacts with the ZenML server. -- **ReactJS Dashboard**: An open-source companion for visualization. -- **Optional**: ZenML Pro API and dashboard. +- **FastAPI server** with SQLite or MySQL database +- **Python Client** for server interaction +- **ReactJS dashboard** (open-source companion) +- **Optional**: ZenML Pro API, database, and dashboard + +For detailed architecture, refer to the [system architecture documentation](../system-architectures.md). ### ZenML Python Client -The ZenML client is a Python package for server interaction, installed via `pip`. It provides a command-line interface for managing stacks and deploying pipelines. For advanced access, the Python SDK allows custom automation. Full documentation is available [here](https://sdkdocs.zenml.io/latest/). +The ZenML client is a Python package for interacting with the ZenML server, installable via `pip`. It provides: +- `zenml` CLI for managing stacks and secrets +- Framework for authoring and deploying pipelines +- Access to metadata via Python SDK for custom automations + +Full documentation for the Python SDK and HTTP API is available at [SDK Docs](https://sdkdocs.zenml.io/latest/) and [API Reference](../../reference/api-reference.md). ## Deployment Scenarios -Initially, ZenML runs locally with an SQLite database, limiting access to cloud components. Use `zenml login --local` to start a local server. For production, deploy the ZenML server centrally to enable team collaboration and cloud component access. +Initially, ZenML runs locally with an SQLite database, limiting access to cloud-based components. Use `zenml login --local` to start a local server. For production, deploy the ZenML server centrally to enable team collaboration and access to cloud components. ## Deployment Options -1. **Managed Deployment**: ZenML Pro offers managed servers (tenants), with data security and metadata tracking handled by ZenML. -2. 
**Self-hosted Deployment**: Deploy ZenML on your infrastructure using methods like Docker, Helm, or Hugging Face Spaces. This option allows full control and access to the Pro feature set. +1. **Managed Deployment**: Utilize ZenML Pro for a managed control plane, where ZenML handles server maintenance while keeping your data secure. +2. **Self-hosted Deployment**: Deploy ZenML on your infrastructure using methods like Docker, Helm, or Hugging Face Spaces. The Pro version is also available for self-hosted setups. -### Deployment Guides +### Deployment Documentation Refer to the following guides for deployment strategies: - [Deploying ZenML using ZenML Pro](../zenml-pro/README.md) - [Deploy with Docker](./deploy-with-docker.md) - [Deploy with Helm](./deploy-with-helm.md) - [Deploy with HuggingFace Spaces](./deploy-using-huggingface-spaces.md) -This documentation provides essential details for deploying ZenML effectively in a production environment, enhancing machine learning workflows. +Deploying ZenML enhances machine learning workflows, enabling production-level success. ================================================== @@ -19560,9 +19469,8 @@ This documentation provides essential details for deploying ZenML effectively in ### Secret Store Configuration and Management #### Centralized Secrets Store -ZenML offers a centralized secrets management system for secure secret registration and management. Metadata (name, ID, owner, scope) is stored in the ZenML server database, while actual secret values are managed separately in the ZenML Secrets Store. In local deployments, secrets are stored in an SQLite database; for remote servers, they are stored in the configured secrets management back-end. Supported back-ends include: - -- Internal SQL database (default) +ZenML offers a centralized secrets management system for secure registration and management of secrets. Metadata (name, ID, owner, scope) is stored in the ZenML server database, while secret values are managed separately in the ZenML Secrets Store. In local deployments, secrets are stored in SQLite; for remote servers, they are stored in the configured secrets management back-end. Supported back-ends include: +- SQL database (default) - AWS Secrets Manager - GCP Secret Manager - Azure Key Vault @@ -19570,26 +19478,24 @@ ZenML offers a centralized secrets management system for secure secret registrat - Custom implementations #### Configuration and Deployment -To configure the secrets store back-end, select a supported back-end and authentication mechanism during deployment. Use ZenML Service Connector authentication methods for this purpose. Follow the principle of least privilege when configuring access credentials. The ZenML server configuration can be updated and redeployed at any time to switch back-ends. For migration strategies, refer to the documented [secrets migration strategy](secret-management.md#secrets-migration-strategy). +To configure the secrets store back-end, select a supported back-end and authentication method during deployment. Use the ZenML Service Connector for authentication, adhering to the principle of least privilege. The secrets store can be updated anytime by modifying the ZenML Server configuration and redeploying. Follow the documented migration strategy to minimize downtime during changes. #### Backup Secrets Store -A secondary Secrets Store can be configured for high availability, backup, and disaster recovery. 
Ensure the backup store is in a different location or type than the primary store to avoid issues. The ZenML Server prioritizes the primary store for read/write operations and falls back to the backup if necessary. Use the CLI commands: - +ZenML can connect to a secondary Secrets Store for high availability, backup, and disaster recovery. Ensure the backup store is in a different location than the primary store. The server prioritizes the primary store but falls back to the backup if needed. Use the CLI commands: - `zenml secret backup`: Backs up secrets from the primary to the backup store. - `zenml secret restore`: Restores secrets from the backup to the primary store. #### Secrets Migration Strategy -To change the secrets storage provider or location, follow these steps: - -1. Set the new store as the secondary store. -2. Redeploy the ZenML server. -3. Use `zenml secret backup` to transfer secrets from the primary to the secondary store. -4. Reconfigure the server to make the secondary store primary and remove the old primary. +To change the provider or location of secrets, follow a migration strategy: +1. Configure the ZenML server to use the new store as the secondary. +2. Redeploy the server. +3. Use `zenml secret backup` to transfer secrets from the old store to the new one. +4. Set the new store as primary and remove the old one. 5. Redeploy the server. -This strategy ensures minimal downtime and proper migration of existing secrets. Note that changes in authentication methods or credentials do not require migration if the storage location remains the same. +This strategy is unnecessary if only credentials or authentication methods change without altering the secrets' location. -For more details on deployment scenarios, refer to the ZenML Pro documentation. +For more details on deployment strategies, refer to the ZenML deployment guide. ==================================================