See the deployment section for more information.
The name of your Kubernetes context, which points to your remote cluster. Run kubectl config get-contexts to see a list of available contexts. NOTE: this is no longer required if you are using a Service Connector to connect your Kubeflow Orchestrator Stack Component to the remote Kubernetes cluster.
A remote artifact store as part of your stack.
A remote container registry as part of your stack.
We can then register the orchestrator and use it in our active stack. This can be done in two ways:
If you have a Service Connector configured to access the remote Kubernetes cluster, you no longer need to set the kubernetes_context attribute to a local kubectl context. In fact, you don't need the local Kubernetes CLI at all. You can connect the stack component to the Service Connector instead:
$ zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubeflow
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully registered orchestrator `<ORCHESTRATOR_NAME>`.
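To complete this first approach, connect the orchestrator to the Service Connector and add it to a stack. A sketch, assuming you have already registered a connector and chosen names for the placeholders:
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set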
The following lists all Kubernetes clusters accessible through the GCP Service Connector:
zenml service-connector verify gcp-user-account --resource-type kubernetes-cluster
Example Command Output
Service connector 'gcp-user-account' is correctly configured with valid credentials and has access to the following resources:
βββββββββββββββββββββββββ―βββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌβββββββββββββββββββββ¨
β π kubernetes-cluster β zenml-test-cluster β
βββββββββββββββββββββββββ·βββββββββββββββββββββ
Calling the login CLI command will configure the local Kubernetes kubectl CLI to access the Kubernetes cluster through the GCP Service Connector:
zenml service-connector login gcp-user-account --resource-type kubernetes-cluster --resource-id zenml-test-cluster
Example Command Output
β ΄ Attempting to configure local client using service connector 'gcp-user-account'...
Context "gke_zenml-core_zenml-test-cluster" modified.
Updated local kubeconfig with the cluster details. The current kubectl context was set to 'gke_zenml-core_zenml-test-cluster'.
The 'gcp-user-account' Kubernetes Service Connector was used to successfully configure the local Kubernetes cluster client/SDK.
To verify that the local Kubernetes kubectl CLI is correctly configured, the following command can be used:
kubectl cluster-info
Example Command Output
Kubernetes control plane is running at https://35.185.95.223
GLBCDefaultBackend is running at https://35.185.95.223/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://35.185.95.223/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://35.185.95.223/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
π§Installation
Installing ZenML and getting started.
ZenML is a Python package that can be installed directly via pip:
pip install zenml
Note that ZenML currently supports Python 3.8, 3.9, 3.10, and 3.11. Please make sure that you are using a supported Python version.
Install with the dashboard
ZenML comes bundled with a web dashboard that lives inside a sister repository. In order to get access to the dashboard locally, you need to launch the ZenML Server and Dashboard locally. For this, you need to install the optional dependencies for the ZenML Server:
pip install "zenml[server]"
We highly encourage you to install ZenML in a virtual environment. At ZenML, we like to use virtualenvwrapper or pyenv-virtualenv to manage our Python virtual environments.
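For example, a minimal setup using Python's built-in venv module (a sketch; the environment name is arbitrary and any virtual environment tool works):
python3 -m venv zenml-env
source zenml-env/bin/activate
pip install "zenml[server]"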
Nightly builds
ZenML also publishes nightly builds under the zenml-nightly package name. These are built from the latest develop branch (to which work ready for release is published) and are not guaranteed to be stable. To install the nightly build, run:
pip install zenml-nightly
Verifying installations
Once the installation is completed, you can check whether the installation was successful either through Bash:
zenml version
or through Python:
import zenml
print(zenml.__version__)
If you would like to learn more about the current release, please visit our PyPI package page.
Running with Docker
zenml is also available as a Docker image hosted publicly on DockerHub. Use the following command to get started in a bash environment with zenml available:
docker run -it zenmldocker/zenml /bin/bash
If you would like to run the ZenML server with Docker:
docker run -it -d -p 8080:8080 zenmldocker/zenml-server
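Once the server container is up, you can point a local ZenML client at it. This is a sketch that assumes the default port mapping above; depending on your ZenML version, you may be prompted for credentials:
zenml connect --url http://localhost:8080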
Deploying the server
Though ZenML can run entirely as a pip package on a local system, its advanced features are dependent on a centrally-deployed ZenML server accessible to other MLOps stack components. You can read more about it here.
aspyre/src/zenml/empty-connectors@zenml-core.json.
Successfully registered service connector `gcp-impersonate-sa` with access to the following resources:
βββββββββββββββββ―βββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββΌβββββββββββββββββββββββ¨
β π¦ gcs-bucket β gs://zenml-bucket-sl β
βββββββββββββββββ·βββββββββββββββββββββββ
Short-lived credentials
This category of authentication methods uses temporary credentials explicitly configured in the Service Connector or generated by the Service Connector during auto-configuration. Of all available authentication methods, this is probably the least practical for everyday use: when short-lived credentials expire, Service Connectors become unusable and need to be either manually updated or replaced.
On the other hand, this authentication method is ideal if you're looking to grant someone else in your team temporary access to some resources without exposing your long-lived credentials.
A previous section described how temporary credentials can be automatically generated from other, long-lived credentials by most cloud provider Service Connectors. It only stands to reason that temporary credentials can also be generated manually by external means such as cloud provider CLIs and used directly to configure Service Connectors, or automatically generated during Service Connector auto-configuration.
This may be used as a way to grant an external party temporary access to some resources and have the Service Connector automatically become unusable (i.e. expire) after some time. Your long-lived credentials are kept safe, while the Service Connector only stores a short-lived credential.
The following is an example of using Service Connector auto-configuration to automatically generate a short-lived token from long-lived credentials configured for the local cloud provider CLI (AWS in this case):
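The example itself was truncated from this excerpt. A sketch of what it looks like, assuming the local AWS CLI is already configured with credentials (sts-token is one of the AWS Service Connector's documented authentication methods):
zenml service-connector register aws-sts-token --type aws --auto-configure --auth-method sts-token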
from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(python_package_installer="uv")

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
...
uv is a relatively new project and not as stable as pip yet, which might lead to errors during package installation. If this happens, try switching the installer back to pip and see if that solves the issue.
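Switching back is a one-line change to the same settings object (a sketch):
docker_settings = DockerSettings(python_package_installer="pip")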
ββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β ID β 9a810521-ef41-4e45-bb48-8569c5943dc6 β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β NAME β aws-implicit (s3-bucket | s3://sagemaker-studio-d8a14tvjsmb client) β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β TYPE β πΆ aws β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β AUTH METHOD β secret-key β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE TYPES β π¦ s3-bucket β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE NAME β s3://sagemaker-studio-d8a14tvjsmb β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SECRET ID β β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SESSION DURATION β N/A β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β EXPIRES IN β N/A β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β OWNER β default β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β WORKSPACE β default β
Amazon SageMaker
Executing individual steps in SageMaker.
SageMaker offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's SageMaker step operator allows you to submit individual steps to be run on Sagemaker compute instances.
When to use it
You should use the SageMaker step operator if:
one or more steps of your pipeline require computing resources (CPU, GPU, memory) that are not provided by your orchestrator.
you have access to SageMaker. If you're using a different cloud provider, take a look at the Vertex or AzureML step operators.
How to deploy it
Create a role in the IAM console that you want the jobs running in SageMaker to assume. This role should at least have the AmazonS3FullAccess and AmazonSageMakerFullAccess policies applied. Check here for a guide on how to set up this role.
Infrastructure Deployment
A Sagemaker step operator can be deployed directly from the ZenML CLI:
zenml orchestrator deploy sagemaker_step_operator --flavor=sagemaker --provider=aws ...
You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
How to use it
To use the SageMaker step operator, we need:
The ZenML aws integration installed. If you haven't done so, run:
zenml integration install aws
Docker installed and running.
An IAM role with the correct permissions. See the deployment section for detailed instructions.
An AWS container registry as part of our stack. Take a look here for a guide on how to set that up.
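The registration command itself is not shown in this excerpt. A sketch of the documented pattern, where the role ARN comes from the IAM role created in the deployment section (the flag names follow the flavor's configuration attributes and may differ by version):
zenml step-operator register <STEP_OPERATOR_NAME> --flavor=sagemaker --role=<SAGEMAKER_ROLE_ARN>
zenml stack update -s <STEP_OPERATOR_NAME>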
$ zenml container-registry connect dockerhub --connector dockerhub
Successfully connected container registry `dockerhub` to the following resources:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββββββ―βββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββββββΌβββββββββββββββββ¨
β cf55339f-dbc8-4ee6-862e-c25aff411292 β dockerhub β π³ docker β π³ docker-registry β docker.io β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββββββ·βββββββββββββββββ
As a final step, you can use the Default Container Registry in a ZenML Stack:
# Register and set a stack with the new container registry
zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set
Linking the Default Container Registry to a Service Connector means that your local Docker client is no longer authenticated to access the remote registry. If you need to manually interact with the remote registry via the Docker CLI, you can use the local login Service Connector feature to temporarily authenticate your local Docker client to the remote registry:
zenml service-connector login <CONNECTOR_NAME>
Example Command Output
$ zenml service-connector login dockerhub
β Ή Attempting to configure local client using service connector 'dockerhub'...
WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
The 'dockerhub' Docker Service Connector was used to successfully configure the local Docker/OCI container registry client/SDK.
For more information and a full list of configurable attributes of the Default container registry, check out the SDK Docs.
Deploy with Docker
Deploying ZenML in a Docker container.
The ZenML server container image is available at zenmldocker/zenml-server and can be used to deploy ZenML with a container management or orchestration tool like Docker and docker-compose, or a serverless platform like Cloud Run, Container Apps, and more! This guide walks you through the various configuration options that the ZenML server container expects as well as a few deployment use cases.
Try it out locally first
If you're just looking for a quick way to deploy the ZenML server using a container, without going through the hassle of interacting with a container management tool like Docker and manually configuring your container, you can use the ZenML CLI to do so. You only need to have Docker installed and running on your machine:
zenml up --docker
This command deploys a ZenML server locally in a Docker container, then connects your client to it. Similar to running plain zenml up, the server and the local ZenML client share the same SQLite database.
The rest of this guide is addressed to advanced users who are looking to manually deploy and manage a containerized ZenML server.
ZenML server configuration options
If you're planning on deploying a custom containerized ZenML server yourself, you probably need to configure some settings for it like the database it should use, the default user details, and more. The ZenML server container image uses sensible defaults, so you can simply start a container without worrying too much about the configuration. However, if you're looking to connect the ZenML server to an external MySQL database or secrets management service, to persist the internal SQLite database, or simply want to control other settings like the default account, you can do so by customizing the container's environment variables.
The following environment variables can be passed to the container:
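The variable list itself is truncated in this excerpt. As an illustration of the mechanism, this sketch starts the server container against an external MySQL database via the documented ZENML_STORE_URL variable (host and credentials are placeholders):
docker run -it -d -p 8080:8080 \
  -e ZENML_STORE_URL=mysql://username:password@mysql-host:3306/zenml \
  zenmldocker/zenml-server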
ββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE TYPES β π΅ gcp-generic, π¦ gcs-bucket, π kubernetes-cluster, π³ docker-registry β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE NAME β <multiple> β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SECRET ID β 4694de65-997b-4929-8831-b49d5e067b97 β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SESSION DURATION β N/A β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β EXPIRES IN β 59m46s β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β OWNER β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β WORKSPACE β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SHARED β β β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β CREATED_AT β 2023-05-19 09:04:33.557126 β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β UPDATED_AT β 2023-05-19 09:04:33.557127 β
ββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Configuration
Attach metadata to steps
You might want to log metadata and have that be attached to a specific step during the course of your work. This is possible by using the log_step_metadata method. This method allows you to attach a dictionary of key-value pairs as metadata to a step. The metadata can be any JSON-serializable value, including custom classes such as Uri, Path, DType, and StorageSize.
You can call this method from within a step or from outside. If you call it from within a step, it will attach the metadata to the step and run that are currently being executed.
from zenml import step, log_step_metadata, ArtifactConfig, get_step_context
from typing import Annotated
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.base import ClassifierMixin
@step
def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier", is_model_artifact=True)]:
    """Train a model."""
    # Fit the model and compute metrics
    classifier = RandomForestClassifier().fit(dataset)
    accuracy, precision, recall = ...
    # Log metadata at the step level
    # This associates the metadata with the ZenML step run
    log_step_metadata(
        metadata={
            "evaluation_metrics": {
                "accuracy": accuracy,
                "precision": precision,
                "recall": recall,
            },
        },
    )
    return classifier
If you call it from outside, you can attach the metadata to a specific step run of any pipeline and step. This is useful if you want to attach the metadata after you've run the step.
from zenml import log_step_metadata
# run some step
# subsequently log the metadata for the step
log_step_metadata(
    metadata={
        "some_metadata": {"a_number": 3},
    },
    pipeline_name_id_or_prefix="my_pipeline",
    step_name="my_step",
    run_id="my_step_run_id",
)
Fetching logged metadata
Once metadata has been logged in an artifact, model, or step, we can easily fetch the metadata with the ZenML Client:
from zenml.client import Client
client = Client()
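A sketch of fetching the step-level metadata logged above (the run identifier is a placeholder for one of your own runs):
run = client.get_pipeline_run("<RUN_NAME_OR_ID>")
step = run.steps["my_step"]
print(step.run_metadata["some_metadata"].value)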
Connecting artifacts via a Model
Structuring an MLOps project
Now that we've learned about managing artifacts and models, we can shift our attention again to the thing that brings them together: Pipelines. This trifecta together will then inform how we structure our project.
In order to see the recommended repository structure of a ZenML MLOps project, read the best practices section.
An MLOps project can often be broken down into many different pipelines. For example:
A feature engineering pipeline that prepares raw data into a format ready to get trained.
A training pipeline that takes input data from the feature engineering pipeline and trains a model on it.
An inference pipeline that runs batch predictions on the trained model and often takes pre-processing from the training pipeline.
A deployment pipeline that deploys a trained model into a production endpoint.
The lines between these pipelines can often get blurry: Some use cases call for these pipelines to be merged into one big pipeline. Others go further and break the pipeline down into even smaller chunks. Ultimately, the decision of how to structure your pipelines depends on the use case and requirements of the project.
No matter how you design these pipelines, one thing stays consistent: you will often need to transfer or share information (in particular artifacts, models, and metadata) between pipelines. Here are some common patterns that you can use to help facilitate such an exchange:
Pattern 1: Artifact exchange between pipelines through Client
Let's say we have a feature engineering pipeline and a training pipeline. The feature engineering pipeline is like a factory, pumping out many different datasets. Only a few of these datasets should be selected to be sent to the training pipeline to train an actual model.
In this scenario, the ZenML Client can be used to facilitate such an exchange:
from zenml import pipeline
from zenml.client import Client
@pipeline
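def feature_engineering_pipeline():
    # Produces (among other things) a dataset artifact, e.g. "iris_dataset"
    ...

# A sketch of the consuming pipeline; the original example was truncated
# here. Artifact and step names are illustrative, not the docs' own.
@pipeline
def training_pipeline():
    client = Client()
    # Fetch by name alone - uses the latest version of this artifact
    dataset_artifact = client.get_artifact_version("iris_dataset")
    ...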
configure the local Generic Azure resource client/SDK.
Stack Components use
The Azure Artifact Store Stack Component can be connected to a remote Azure blob storage container through an Azure Service Connector.
The Azure Service Connector can also be used with any Orchestrator or Model Deployer stack component flavor that relies on Kubernetes clusters to manage workloads. This allows AKS Kubernetes container workloads to be managed without the need to configure and maintain explicit Azure or Kubernetes kubectl configuration contexts and credentials in the target environment or in the Stack Component itself.
Similarly, Container Registry Stack Components can be connected to an ACR Container Registry through an Azure Service Connector. This allows container images to be built and published to private ACR container registries without the need to configure explicit Azure credentials in the target environment or the Stack Component.
End-to-end examples
This is an example of an end-to-end workflow involving Service Connectors that uses a single multi-type Azure Service Connector to give access to multiple resources for multiple Stack Components. A complete ZenML Stack is registered composed of the following Stack Components, all connected through the same Service Connector:
a Kubernetes Orchestrator connected to an AKS Kubernetes cluster
an Azure Blob Storage Artifact Store connected to an Azure blob storage container
an Azure Container Registry connected to an ACR container registry
a local Image Builder
As a last step, a simple pipeline is run on the resulting Stack.
This example needs to use a remote ZenML Server that is reachable from Azure.
Configure an Azure service principal with a client secret and give it permissions to access an Azure blob storage container, an AKS Kubernetes cluster and an ACR container registry. Also make sure you have the Azure ZenML integration installed:
zenml integration install -y azure
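A sketch of registering the multi-type connector with those service principal credentials (tenant_id, client_id and client_secret are the documented service-principal configuration attributes; the values are placeholders you fill in):
zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id=<TENANT_ID> --client_id=<CLIENT_ID> --client_secret=<CLIENT_SECRET>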
HyperAI Service Connector
Configuring HyperAI Connectors to connect ZenML to HyperAI instances.
The ZenML HyperAI Service Connector allows authenticating with a HyperAI instance for deployment of pipeline runs. This connector provides pre-authenticated Paramiko SSH clients to Stack Components that are linked to it.
$ zenml service-connector list-types --type hyperai
βββββββββββββββββββββββββββββ―βββββββββββββ―βββββββββββββββββββββ―βββββββββββββββ―ββββββββ―βββββββββ
β NAME β TYPE β RESOURCE TYPES β AUTH METHODS β LOCAL β REMOTE β
β ββββββββββββββββββββββββββββΌβββββββββββββΌβββββββββββββββββββββΌβββββββββββββββΌββββββββΌβββββββββ¨
β HyperAI Service Connector β π€ hyperai β π€ hyperai-instance β rsa-key β ✅ β ✅ β
β β β β dsa-key β β β
β β β β ecdsa-key β β β
β β β β ed25519-key β β β
βββββββββββββββββββββββββββββ·βββββββββββββ·βββββββββββββββββββββ·βββββββββββββββ·ββββββββ·βββββββββ
Prerequisites
The HyperAI Service Connector is part of the HyperAI integration. It is necessary to install the integration in order to use this Service Connector:
zenml integration install hyperai
Resource Types
The HyperAI Service Connector supports HyperAI instances.
Authentication Methods
ZenML creates an SSH connection to the HyperAI instance in the background when using this Service Connector. It then provides these connections to stack components requiring them, such as the HyperAI Orchestrator. Multiple authentication methods are supported:
RSA key based authentication.
DSA (DSS) key based authentication.
ECDSA key based authentication.
ED25519 key based authentication.
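A sketch of registering the connector with an RSA key. The flag names here are assumptions drawn from the connector's configuration attributes (a base64-encoded private key, instance hostnames, and a username); verify them against the current CLI before use:
zenml service-connector register <CONNECTOR_NAME> --type=hyperai --auth-method=rsa-key --base64_ssh_key=<BASE64_SSH_KEY> --hostnames=<INSTANCE_IP> --username=<INSTANCE_USERNAME>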
AzureML
Executing individual steps in AzureML.
AzureML offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's AzureML step operator allows you to submit individual steps to be run on AzureML compute instances.
When to use it
You should use the AzureML step operator if:
one or more steps of your pipeline require computing resources (CPU, GPU, memory) that are not provided by your orchestrator.
you have access to AzureML. If you're using a different cloud provider, take a look at the SageMaker or Vertex step operators.
How to deploy it
Create a Machine learning resource on Azure .
Once your resource is created, you can head over to the Azure Machine Learning Studio and create a compute cluster to run your pipelines.
Create an environment for your pipelines. Follow this guide to set one up.
(Optional) Create a Service Principal for authentication. This is required if you intend to run your pipelines with a remote orchestrator.
How to use it
To use the AzureML step operator, we need:
The ZenML azure integration installed. If you haven't done so, run:
zenml integration install azure
An AzureML compute cluster and environment. See the deployment section for detailed instructions.
A remote artifact store as part of your stack. This is needed so that both your orchestration environment and AzureML can read and write step artifacts. Check out the documentation page of the artifact store you want to use for more information on how to set that up and configure authentication for it.
We can then register the step operator and use it in our active stack:
zenml step-operator register <NAME> \
--flavor=azureml \
--subscription_id=<AZURE_SUBSCRIPTION_ID> \
--resource_group=<AZURE_RESOURCE_GROUP> \
--workspace_name=<AZURE_WORKSPACE_NAME> \
--compute_target_name=<AZURE_COMPUTE_TARGET_NAME> \
--environment_name=<AZURE_ENVIRONMENT_NAME>
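Once registered, add the step operator to the active stack and reference it from the steps that should run on AzureML compute. This follows the documented pattern for step operators generally; treat it as a sketch:
zenml stack update -s <NAME>
and in code:
@step(step_operator="<NAME>")
def trainer(...) -> ...:
    """Train a model on an AzureML compute instance."""
    ...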
to authenticate with your cloud provider of choice.
We first need to install the SkyPilot integration for AWS and the AWS connectors extra, using the following two commands:
pip install "zenml[connectors-aws]"
zenml integration install aws skypilot_aws
To provision VMs on AWS, your VM Orchestrator stack component needs to be configured to authenticate with AWS Service Connector. To configure the AWS Service Connector, you need to register a new service connector configured with AWS credentials that have at least the minimum permissions required by SkyPilot as documented here.
First, check that the AWS service connector type is available using the following command:
zenml service-connector list-types --type aws
βββββββββββββββββββββββββ―βββββββββ―ββββββββββββββββββββββββ―βββββββββββββββββββ―ββββββββ―βββββββββ
β NAME β TYPE β RESOURCE TYPES β AUTH METHODS β LOCAL β REMOTE β
β ββββββββββββββββββββββββΌβββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββΌββββββββΌβββββββββ¨
β AWS Service Connector β πΆ aws β πΆ aws-generic β implicit β ✅ β ✅ β
β β β π¦ s3-bucket β secret-key β β β
β β β π kubernetes-cluster β sts-token β β β
β β β π³ docker-registry β iam-role β β β
β β β β session-token β β β
β β β β federation-token β β β
βββββββββββββββββββββββββ·βββββββββ·ββββββββββββββββββββββββ·βββββββββββββββββββ·ββββββββ·βββββββββ
Next, configure a service connector using the CLI or the dashboard with the AWS credentials. For example, the following command uses the local AWS CLI credentials to auto-configure the service connector:
zenml service-connector register aws-skypilot-vm --type aws --region=us-east-1 --auto-configure
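With the connector in place, the orchestrator can be registered and attached to it. A sketch, assuming the documented vm_aws flavor name for the SkyPilot AWS orchestrator:
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_aws
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector aws-skypilot-vm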
the different components of the Vertex orchestrator:
the ZenML client environment is the environment where you run the ZenML code responsible for building the pipeline Docker image and submitting the pipeline to Vertex AI, among other things. This is usually your local machine or some other environment used to automate running pipelines, like a CI/CD job. This environment needs to be able to authenticate with GCP and needs to have the necessary permissions to create a job in Vertex Pipelines (e.g. the Vertex AI User role). If you are planning to run pipelines on a schedule, the ZenML client environment also needs additional permissions:
the Storage Object Creator Role, to be able to write the pipeline JSON file to the artifact store directly (NOTE: not needed if the Artifact Store is configured with credentials or is linked to a Service Connector)
the Vertex AI pipeline environment is the GCP environment in which the pipeline steps themselves run. The Vertex AI pipeline runs in the context of a GCP service account which we'll call here the workload service account. The workload service account can be explicitly configured in the orchestrator configuration via the workload_service_account parameter. If it is omitted, the orchestrator will use the Compute Engine default service account for the GCP project in which the pipeline is running. This service account needs to have the following permissions:
permissions to run a Vertex AI pipeline (e.g. the Vertex AI Service Agent role).
As you can see, there can be dedicated service accounts involved in running a Vertex AI pipeline. That's two service accounts if you also use a service account to authenticate to GCP in the ZenML client environment. However, you can keep it simple and use the same service account everywhere.
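A sketch of wiring the workload service account into the orchestrator registration (project, location and workload_service_account are documented Vertex orchestrator configuration attributes; the values are placeholders):
zenml orchestrator register vertex-orchestrator --flavor=vertex --project=<GCP_PROJECT_ID> --location=<GCP_REGION> --workload_service_account=<SA_NAME>@<GCP_PROJECT_ID>.iam.gserviceaccount.com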
Configuration use-case: local gcloud CLI with user account
several options are presented.
hyperparameter tuning?
Our dedicated documentation guide on implementing this is the place to learn more.
reset things when something goes wrong?
To reset your ZenML client, you can run zenml clean which will wipe your local metadata database and reset your client. Note that this is a destructive action, so feel free to reach out to us on Slack before doing this if you are unsure.
steps that create other steps AKA dynamic pipelines and steps?
Please read our general information on how to compose steps + pipelines together to start with. You might also find the code examples in our guide to implementing hyperparameter tuning which is related to this topic.
templates: using starter code with ZenML?
Project templates allow you to get going quickly with ZenML. We recommend the Starter template (starter) for most use cases which gives you a basic scaffold and structure around which you can write your own code. You can also build templates for others inside a Git repository and use them with ZenML's templates functionality.
upgrade my ZenML client and/or server?
Upgrading your ZenML client package is as simple as running pip install --upgrade zenml in your terminal. For upgrading your ZenML server, please refer to the dedicated documentation section which covers most of the ways you might do this as well as common troubleshooting steps.
use a <YOUR_COMPONENT_GOES_HERE> stack component?
For information on how to use a specific stack component, please refer to the component guide which contains all our tips and advice on how to use each integration and component with ZenML.
πVisualizing artifacts
Configuring ZenML to display data visualizations in the dashboard.
It is easy to associate visualizations of data and artifacts in ZenML:
a pipeline or a step level, or via a YAML config.
Once you configure a pipeline this way, all artifacts generated during pipeline runs are automatically linked to the specified model. This connecting of artifacts provides lineage tracking and transparency into what data and models are used during training, evaluation, and inference.
from zenml import Model, pipeline, step

model = Model(
    # The name uniquely identifies this model
    # It usually represents the business use case
    name="iris_classifier",
    # The version specifies the version
    # If None or an unseen version is specified, it will be created
    # Otherwise, a version will be fetched.
    version=None,
    # Some other properties may be specified
    license="Apache 2.0",
    description="A classification model for the iris dataset.",
)

# The step configuration will take precedence over the pipeline
@step(model=model)
def svc_trainer(...) -> ...:
    ...

# This configures it for all steps within the pipeline
@pipeline(model=model)
def training_pipeline(gamma: float = 0.002):
    # Now this pipeline will have the `iris_classifier` model active.
    X_train, X_test, y_train, y_test = training_data_loader()
    svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)

if __name__ == "__main__":
    training_pipeline()
# In the YAML the same can be done; in this case, the
# passing to the decorators is not needed
# model:
# name: iris_classifier
# license: "Apache 2.0"
# description: "A classification model for the iris dataset."
The above will establish a link between all artifacts that pass through this ZenML pipeline and this model. This includes the technical model which is what comes out of the svc_trainer step. You will be able to see all associated artifacts and pipeline runs, all within one view.
Furthermore, this pipeline run and all other pipeline runs that are configured with this model configuration will be linked to this model as well.
You can see all versions of a model, and its associated artifacts and runs, like this:
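The listing itself was truncated from this excerpt. On the CLI side, this is a sketch of the documented commands (verify against zenml model --help on your version):
zenml model list
zenml model version list <MODEL_NAME>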
model.
How do you use it?
Deploy a logged model
Following MLflow's documentation, if we want to deploy a model as a local inference server, we need the model to be logged in the MLflow experiment tracker first. Once the model is logged, we can use the model URI either from the artifact path saved with the MLflow run or using model name and version if a model is registered in the MLflow model registry.
In the following examples, we will show how to deploy a model using the MLflow Model Deployer, in two different scenarios:
We already know the logged model URI and we want to deploy it as a local inference server.
from typing import Optional

from zenml import pipeline, step, get_step_context
from zenml.client import Client
# Import locations below reflect the ZenML version these docs target and
# may vary between releases:
from zenml.constants import DEFAULT_SERVICE_START_STOP_TIMEOUT
from zenml.integrations.mlflow.services.mlflow_deployment import (
    MLFlowDeploymentConfig,
    MLFlowDeploymentService,
)
from zenml.logger import get_logger

logger = get_logger(__name__)

@step
def deploy_model() -> Optional[MLFlowDeploymentService]:
    # Deploy a model using the MLflow Model Deployer
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
        description="An example of deploying a model using the MLflow Model Deployer",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        # Either an artifact path within a run or a registered model version:
        model_uri="runs:/<run_id>/model",  # or "models:/<model_name>/<model_version>"
        model_name="model",
        workers=1,
        mlserver=False,
        timeout=DEFAULT_SERVICE_START_STOP_TIMEOUT,
    )
    service = model_deployer.deploy_model(mlflow_deployment_config)
    logger.info(f"The deployed service info: {model_deployer.get_model_server_info(service)}")
    return service
We don't know the logged model URI, since the model was logged in a previous step. We want to deploy the model as a local inference server. ZenML provides set of functionalities that would make it easier to get the model URI from the current run and deploy it.
from zenml import pipeline, step, get_step_context
from zenml.client import Client
from mlflow.tracking import MlflowClient, artifact_utils
@step
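def deploy_model() -> Optional[MLFlowDeploymentService]:
    # The original example is truncated here; this continuation is a sketch
    # of the documented approach, reusing the imports shown above.
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    experiment_tracker = zenml_client.active_stack.experiment_tracker
    # Look up the MLflow run that the current pipeline run logged to
    # (get_run_id is assumed from the MLflow experiment tracker's API).
    mlflow_run_id = experiment_tracker.get_run_id(
        experiment_name=get_step_context().pipeline_name,
        run_name=get_step_context().run_name,
    )
    # Resolve the model URI from the run's artifacts via the MLflow client
    model_uri = artifact_utils.get_artifact_uri(run_id=mlflow_run_id, artifact_path="model")
    ...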
ββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π kubernetes-cluster β zenml-test-cluster β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π³ docker-registry β gcr.io/zenml-core β
βββββββββββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββββββββββββ
Scopes: multi-type, multi-instance, and single-instance
These terms are briefly explained in the Terminology section: you can register a Service Connector that grants access to multiple types of resources, to multiple instances of the same Resource Type, or to a single resource.
Service Connectors created from basic Service Connector Types like Kubernetes and Docker are single-resource by default, while Service Connectors used to connect to managed cloud resources like AWS and GCP can take all three forms.
The following example shows registering three different Service Connectors configured from the same AWS Service Connector Type using three different scopes but with the same credentials:
a multi-type AWS Service Connector that allows access to every possible resource accessible with the configured credentials
a multi-instance AWS Service Connector that allows access to multiple S3 buckets
a single-instance AWS Service Connector that only permits access to one S3 bucket
zenml service-connector register aws-multi-type --type aws --auto-configure
Example Command Output
β Registering service connector 'aws-multi-type'...
Successfully registered service connector `aws-multi-type` with access to the following resources:
βββββββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β πΆ aws-generic β us-east-1 β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π¦ s3-bucket β s3://aws-ia-mwaa-715803424590 β
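The output is truncated here. For comparison, the matching multi-instance and single-instance registrations from the same example look like this (a sketch; the bucket names are the example's own):
zenml service-connector register aws-s3-multi-instance --type aws --auto-configure --resource-type s3-bucket
zenml service-connector register aws-s3-zenfiles --type aws --auto-configure --resource-type s3-bucket --resource-id s3://zenfiles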
Migration guide 0.23.0 β 0.30.0
How to migrate from ZenML 0.20.0-0.23.0 to 0.30.0-0.39.1.
Migrating to 0.30.0 performs non-reversible database changes so downgrading to <=0.23.0 is not possible afterwards. If you are running on an older ZenML version, please follow the 0.20.0 Migration Guide first to prevent unexpected database migration failures.
The ZenML 0.30.0 release removed the ml-pipelines-sdk dependency in favor of natively storing pipeline runs and artifacts in the ZenML database. The corresponding database migration will happen automatically as soon as you run any zenml ... CLI command after installing the new ZenML version, e.g.:
pip install zenml==0.30.0
zenml version # 0.30.0
How to use it
To use the Airflow orchestrator, we need:
The ZenML airflow integration installed. If you haven't done so, run:
zenml integration install airflow
Docker installed and running.
The orchestrator registered and part of our active stack:
zenml orchestrator register <ORCHESTRATOR_NAME> \
--flavor=airflow \
--local=True # set this to `False` if using a remote Airflow deployment
# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
In the local case, we need to reinstall in a certain way for the local Airflow server:
pip install "apache-airflow-providers-docker<3.8.0" "apache-airflow==2.4.0" "pendulum<3.0.0" --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.4.0/constraints-3.9.txt"
Please make sure to replace 3.9 with your Python (major) version in the constraints file URL given above.
Once that is installed, we can start the local Airflow server by running the following command in your terminal. See further below for an alternative way to set up the Airflow server manually, since the zenml stack up command is deprecated.
zenml stack up
This command will start up an Airflow server on your local machine that's running in the same Python environment that you used to provision it. When it is finished, it will print a username and password which you can use to log in to the Airflow UI here.
As long as you didn't configure any custom value for the dag_output_dir attribute of your orchestrator, running a pipeline locally is as simple as calling:
python file_that_runs_a_zenml_pipeline.py
This call will produce a .zip file containing a representation of your ZenML pipeline to the Airflow DAGs directory. From there, the local Airflow server will load it and run your pipeline (it might take a few seconds until the pipeline shows up in the Airflow UI).
EvidentlyMetricConfig.metric(DatasetDriftMetric)]
    ),
)

As can be seen in the example, there are two basic ways of adding metrics to your Evidently report step configuration:
to add a single metric or metric preset: call EvidentlyMetricConfig.metric with an Evidently metric or metric preset class name (or class path or class). The rest of the parameters are the same ones that you would usually pass to the Evidently metric or metric preset class constructor.
to generate multiple metrics, similar to calling the Evidently column metric generator: call EvidentlyMetricConfig.metric_generator with an Evidently metric or metric preset class name (or class path or class) and a list of column names. The rest of the parameters are the same ones that you would usually pass to the Evidently metric or metric preset class constructor.
The ZenML Evidently report step can then be inserted into your pipeline where it can take in two datasets and outputs the Evidently report generated in both JSON and HTML formats, e.g.:
@pipeline(enable_cache=False, settings={"docker": docker_settings})
def text_data_report_test_pipeline():
    """Links all the steps together in a pipeline."""
    data = data_loader()
    reference_dataset, comparison_dataset = data_splitter(data)
    report, _ = text_data_report(
        reference_dataset=reference_dataset,
        comparison_dataset=comparison_dataset,
    )
    test_report, _ = text_data_test(
        reference_dataset=reference_dataset,
        comparison_dataset=comparison_dataset,
    )
    text_analyzer(report)

text_data_report_test_pipeline()
For a version of the same step that works with a single dataset, simply don't pass any comparison dataset:
text_data_report(reference_dataset=reference_dataset)
You should consult the official Evidently documentation for more information on what each metric is useful for and what data columns it requires as input.
The evidently_report_step step also allows for additional Report options to be passed to the Report constructor e.g.:
from zenml.integrations.evidently.steps import (
EvidentlyColumnMapping,
of the pod that was running the step that failed.
Usually, the default log you see in your terminal is sufficient; in the event it's not, it's useful to provide additional logs. Additional logs are not shown by default; you'll have to toggle an environment variable for them. Read the next section to find out how.
4.1 Additional logs
When the default logs are not helpful, ambiguous, or do not point you to the root of the issue, you can toggle the value of the ZENML_LOGGING_VERBOSITY environment variable to change the type of logs shown. The default value of ZENML_LOGGING_VERBOSITY environment variable is:
ZENML_LOGGING_VERBOSITY=INFO
You can pick other values such as WARN, ERROR, CRITICAL, or DEBUG to change what's shown in the logs, then export the environment variable in your terminal. For example, on Linux:
export ZENML_LOGGING_VERBOSITY=DEBUG
Read more about how to set environment variables for:
For Linux.
For macOS.
For Windows.
Client and server logs
When facing a ZenML Server-related issue, you can view the logs of the server to introspect deeper. To achieve this, run:
zenml logs
The logs from a healthy server should look something like this:
INFO:asyncio:Syncing pipeline runs...
2022-10-19 09:09:18,195 - zenml.zen_stores.metadata_store - DEBUG - Fetched 4 steps for pipeline run '13'. (metadata_store.py:315)
2022-10-19 09:09:18,359 - zenml.zen_stores.metadata_store - DEBUG - Fetched 0 inputs and 4 outputs for step 'importer'. (metadata_store.py:427)
2022-10-19 09:09:18,461 - zenml.zen_stores.metadata_store - DEBUG - Fetched 0 inputs and 4 outputs for step 'importer'. (metadata_store.py:427)
2022-10-19 09:09:18,516 - zenml.zen_stores.metadata_store - DEBUG - Fetched 2 inputs and 2 outputs for step 'normalizer'. (metadata_store.py:427)
2022-10-19 09:09:18,606 - zenml.zen_stores.metadata_store - DEBUG - Fetched 0 inputs and 4 outputs for step 'importer'. (metadata_store.py:427)
Most common errors
This section documents frequently encountered errors among users and solutions to each.
f"{input_one} {input_two}"
print(combined_str)@pipeline
def my_pipeline():
output_step_one = step_1()
step_2(input_one="hello", input_two=output_step_one)
if __name__ == "__main__":
my_pipeline()Saving that to a run.py file and running it gives us:
Example Command Output
```text
$ python run.py
Reusing registered pipeline simple_pipeline (version: 1).
Building Docker image(s) for pipeline simple_pipeline.
Building Docker image gcr.io/zenml-core/zenml:simple_pipeline-orchestrator.
Including integration requirements: gcsfs, google-cloud-aiplatform>=1.11.0, google-cloud-build>=3.11.0, google-cloud-container>=2.21.0, google-cloud-functions>=1.8.3, google-cloud-scheduler>=2.7.3, google-cloud-secret-manager, google-cloud-storage>=2.9.0, kfp==1.8.16, kubernetes==18.20.0, shapely<2.0
No .dockerignore found, including all files inside build context.
Step 1/8 : FROM zenmldocker/zenml:0.39.1-py3.8
Step 2/8 : WORKDIR /app
Step 3/8 : COPY .zenml_integration_requirements .
Step 4/8 : RUN pip install --default-timeout=60 --no-cache-dir -r .zenml_integration_requirements
Step 5/8 : ENV ZENML_ENABLE_REPO_INIT_WARNINGS=False
Step 6/8 : ENV ZENML_CONFIG_PATH=/app/.zenconfig
Step 7/8 : COPY . .
Step 8/8 : RUN chmod -R a+rw .
Pushing Docker image gcr.io/zenml-core/zenml:simple_pipeline-orchestrator.
Finished pushing Docker image.
Finished building Docker image(s).
Running pipeline simple_pipeline on stack gcp-demo (caching disabled)
Waiting for Kubernetes orchestrator pod...
Kubernetes orchestrator pod started.
Waiting for pod of step step_1 to start...
Step step_1 has started.
Step step_1 has finished in 1.357s.
Pod of step step_1 completed.
Waiting for pod of step simple_step_two to start...
Step step_2 has started.
Hello World!
Step step_2 has finished in 3.136s.
Pod of step step_2 completed.
Orchestration pod completed.
Dashboard URL: http://34.148.132.191/workspaces/default/pipelines/cec118d1-d90a-44ec-8bd7-d978f726b7aa/runs
```
af89af β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β NAME β azure-session-token β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β TYPE β π¦ azure β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β AUTH METHOD β access-token β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE TYPES β π¦ azure-generic, π¦ blob-container, π kubernetes-cluster, π³ docker-registry β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE NAME β <multiple> β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SECRET ID β b34f2e95-ae16-43b6-8ab6-f0ee33dbcbd8 β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SESSION DURATION β N/A β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β EXPIRES IN β 42m25s β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β OWNER β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
gs://zenml-bucket-sl
gs://zenml-core.appspot.com
gs://zenml-core_cloudbuild
gs://zenml-datasets
Please select one or leave it empty to create a connector that can be used to access any of them []: gs://zenml-datasets
Successfully registered service connector `gcp-interactive` with access to the following resources:
βββββββββββββββββ―ββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββΌββββββββββββββββββββββ¨
β π¦ gcs-bucket β gs://zenml-datasets β
βββββββββββββββββ·ββββββββββββββββββββββ
Regardless of how you came here, you should already have some idea of the following:
the type of resources that you want to connect ZenML to. This may be a Kubernetes cluster, a Docker container registry or an object storage service like AWS S3 or GCS.
the Service Connector implementation (i.e. Service Connector Type) that you want to use to connect to those resources. This could be one of the cloud provider Service Connector Types like AWS and GCP that provide access to a broader range of services, or one of the basic Service Connector Types like Kubernetes or Docker that only target a specific resource.
the credentials and authentication method that you want to use
Other questions that should be answered in this section:
are you just looking to connect a ZenML Stack Component to a single resource? or would you rather configure a wide-access ZenML Service Connector that gives ZenML and all its users access to a broader range of resource types and resource instances with a single set of credentials issued by your cloud provider?
have you already provisioned all the authentication prerequisites (e.g. service accounts, roles, permissions) and prepared the credentials you will need to configure the Service Connector? If you already have one of the cloud provider CLIs configured with credentials on your local host, you can easily use the Service Connector auto-configuration capabilities to get faster where you need to go.
value - sets the secure header to the specified value.
The following secure headers environment variables are supported:
ZENML_SERVER_SECURE_HEADERS_SERVER: The Server HTTP header value used to identify the server. The default value is the ZenML server ID.
ZENML_SERVER_SECURE_HEADERS_HSTS: The Strict-Transport-Security HTTP header value. The default value is max-age=63072000; includeSubDomains.
ZENML_SERVER_SECURE_HEADERS_XFO: The X-Frame-Options HTTP header value. The default value is SAMEORIGIN.
ZENML_SERVER_SECURE_HEADERS_XXP: The X-XSS-Protection HTTP header value. The default value is 0. NOTE: this header is deprecated and should not be customized anymore. The Content-Security-Policy header should be used instead.
ZENML_SERVER_SECURE_HEADERS_CONTENT: The X-Content-Type-Options HTTP header value. The default value is nosniff.
ZENML_SERVER_SECURE_HEADERS_CSP: The Content-Security-Policy HTTP header value. This is by default set to a strict CSP policy that only allows content from the origins required by the ZenML dashboard. NOTE: customizing this header is discouraged, as it may cause the ZenML dashboard to malfunction.
ZENML_SERVER_SECURE_HEADERS_REFERRER: The Referrer-Policy HTTP header value. The default value is no-referrer-when-downgrade.
ZENML_SERVER_SECURE_HEADERS_CACHE: The Cache-Control HTTP header value. The default value is no-store, no-cache, must-revalidate.
ZENML_SERVER_SECURE_HEADERS_PERMISSIONS: The Permissions-Policy HTTP header value. The default value is accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=().
If you prefer to activate the server automatically during the initial deployment and also automate the creation of the initial admin user account, this legacy behavior can be brought back by setting the following environment variables:
ZENML_SERVER_AUTO_ACTIVATE: Set this to 1 to automatically activate the server and create the initial admin user account when the server is first deployed. Defaults to 0.
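As a sketch, passing that variable to the container at startup looks like this (same image and port mapping as in the deployment examples above):
docker run -it -d -p 8080:8080 \
  -e ZENML_SERVER_AUTO_ACTIVATE=1 \
  zenmldocker/zenml-server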
in our active stack. This can be done in two ways:
If you have a Service Connector configured to access the remote Kubernetes cluster, you no longer need to set the kubernetes_context attribute to a local kubectl context. In fact, you don't need the local Kubernetes CLI at all. You can connect the stack component to the Service Connector instead:
$ zenml orchestrator register <ORCHESTRATOR_NAME> --flavor tekton
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully registered orchestrator `<ORCHESTRATOR_NAME>`.
$ zenml service-connector list-resources --resource-type kubernetes-cluster -e
The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace:
ββββββββββββββββββββββββββββββββββββββββ―ββββββββββββββββββββββββ―βββββββββββββββββ―ββββββββββββββββββββββββ―ββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββ¨
β e33c9fac-5daa-48b2-87bb-0187d3782cde β aws-iam-multi-eu β πΆ aws β π kubernetes-cluster β kubeflowmultitenant β
β β β β β zenbox β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββ¨
β ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 β aws-iam-multi-us β πΆ aws β π kubernetes-cluster β zenhacks-cluster β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββ¨
β 1c54b32a-4889-4417-abbd-42d3ace3d03a β gcp-sa-multi β π΅ gcp β π kubernetes-cluster β zenml-test-cluster β
ββββββββββββββββββββββββββββββββββββββββ·ββββββββββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββββββββββ·ββββββββββββββββββββββ
by running: 'zenml service-connector register -i'
The second step is registering a Service Connector that effectively enables ZenML to authenticate to and access one or more remote resources. This step is best handled by someone with some infrastructure knowledge, but there are sane defaults and auto-detection mechanisms built into most Service Connectors that can make this a walk in the park even for the uninitiated. For our simple example, we're registering an AWS Service Connector with AWS credentials automatically lifted up from your local host, giving ZenML access to the same resources that you can access from your local machine through the AWS CLI.
This step assumes the AWS CLI is already installed and set up with credentials on your machine (e.g. by running aws configure).
zenml service-connector register aws-s3 --type aws --auto-configure --resource-type s3-bucket
Example Command Output
β Ό Registering service connector 'aws-s3'...
Successfully registered service connector `aws-s3` with access to the following resources:
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE ┃ RESOURCE NAMES                 ┃
┠───────────────╂────────────────────────────────┨
┃ 📦 s3-bucket  ┃ s3://aws-ia-mwaa-715803424590  ┃
┃               ┃ s3://zenbytes-bucket           ┃
┃               ┃ s3://zenfiles                  ┃
┃               ┃ s3://zenml-demos               ┃
┃               ┃ s3://zenml-generative-chat     ┃
┃               ┃ s3://zenml-public-datasets     ┃
┃               ┃ s3://zenml-public-swagger-spec ┃
┗━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
The CLI validates and shows all S3 buckets that can be accessed with the auto-discovered credentials.
The ZenML CLI provides an interactive way of registering Service Connectors. Just use the -i command line argument and follow the interactive guide:
zenml service-connector register -i | how-to | https://docs.zenml.io/how-to/auth-management | 484 |
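After registration, the connector can be attached to Stack Components. A minimal sketch that registers an S3 artifact store on one of the buckets listed above and links it to the aws-s3 connector:
zenml artifact-store register cloud_artifact_store -f s3 --path=s3://zenfiles
zenml artifact-store connect cloud_artifact_store --connector aws-s3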
Deploying ZenML
Why do we need to deploy ZenML?
Moving your ZenML Server to a production environment offers several benefits over staying local:
Scalability: Production environments are designed to handle large-scale workloads, allowing your models to process more data and deliver faster results.
Reliability: Production-grade infrastructure ensures high availability and fault tolerance, minimizing downtime and ensuring consistent performance.
Collaboration: A shared production environment enables seamless collaboration between team members, making it easier to iterate on models and share insights.
Despite these advantages, transitioning to production can be challenging due to the complexities involved in setting up the needed infrastructure.
ZenML Server
When you first get started with ZenML, it relies on the following architecture on your machine.
The SQLite database that you can see in this diagram is used to store information about pipelines, pipeline runs, stacks, and other configurations. Users can run the zenml up command to spin up a local REST server to serve the dashboard. The diagram for this looks as follows:
In Scenario 2, the zenml up command implicitly connects the client to the server.
Currently, the ZenML server supports a legacy and a brand-new version of the dashboard. To use the legacy version, simply run the following command: zenml up --legacy
In order to move into production, the ZenML server needs to be deployed somewhere centrally so that the different cloud stack components can read from and write to the server. Additionally, this also allows all your team members to connect to it and share stacks and pipelines.
Deploying a ZenML Server | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml | 315 |
nfiguration)
...
    context.save_expectation_suite(
        expectation_suite=suite,
        expectation_suite_name=expectation_suite_name,
    )
    context.build_data_docs()
    return suite
The same approach must be used if you are using a Great Expectations configuration managed by ZenML and are using the Jupyter notebooks generated by the Great Expectations CLI.
Visualizing Great Expectations Suites and Results
You can view visualizations of the suites and results generated by your pipeline steps directly in the ZenML dashboard by clicking on the respective artifact in the pipeline run DAG.
Alternatively, if you are running inside a Jupyter notebook, you can load and render the suites and results using the artifact.visualize() method, e.g.:
from zenml.client import Client

def visualize_results(pipeline_name: str, step_name: str) -> None:
    pipeline = Client().get_pipeline(pipeline_name)
    last_run = pipeline.last_run
    validation_step = last_run.steps[step_name]
    validation_step.visualize()

if __name__ == "__main__":
    visualize_results("validation_pipeline", "profiler")
    visualize_results("validation_pipeline", "train_validator")
    visualize_results("validation_pipeline", "test_validator")
| stack-components | https://docs.zenml.io/stack-components/data-validators/great-expectations | 255 |
initializing zenml at the root of your repository. If ZenML does not find an initialized ZenML repository in any parent directory, it will default to the current working directory, but usually it's better not to rely on this mechanism and to initialize zenml at the root.
Afterward, you should see the new custom alerter flavor in the list of available alerter flavors:
zenml alerter flavor list
It is important to draw attention to when and how these abstractions are coming into play in a ZenML workflow.
The MyAlerterFlavor class is imported and utilized upon the creation of the custom flavor through the CLI.
The MyAlerterConfig class is imported when someone tries to register/update a stack component with the my_alerter flavor. Especially, during the registration process of the stack component, the config will be used to validate the values given by the user. As Config objects are inherently pydantic objects, you can also add your own custom validators here.
The MyAlerter only comes into play when the component is ultimately in use.
The design behind this interaction lets us separate the configuration of the flavor from its implementation. This way we can register flavors and components even when the major dependencies behind their implementation are not installed in our local setting (assuming the MyAlerterFlavor and the MyAlerterConfig are implemented in a different module/path than the actual MyAlerter).
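To make this separation concrete, here is a minimal sketch of the three classes; it assumes the base classes are importable from zenml.alerter and trims the alerter logic down to a stub:
from typing import Optional, Type

from zenml.alerter import BaseAlerter, BaseAlerterConfig, BaseAlerterFlavor
from zenml.alerter import BaseAlerterStepParameters


class MyAlerterConfig(BaseAlerterConfig):
    # Validated by pydantic when the stack component is registered
    my_webhook_url: str


class MyAlerter(BaseAlerter):
    # Only instantiated when the component is actually in use
    def post(self, message: str, params: Optional[BaseAlerterStepParameters]) -> bool:
        print(f"Posting to {self.config.my_webhook_url}: {message}")
        return True


class MyAlerterFlavor(BaseAlerterFlavor):
    @property
    def name(self) -> str:
        return "my_alerter"

    @property
    def config_class(self) -> Type[MyAlerterConfig]:
        return MyAlerterConfig

    @property
    def implementation_class(self) -> Type[MyAlerter]:
        return MyAlerter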
| stack-components | https://docs.zenml.io/v/docs/stack-components/alerters/custom | 310 |
Special Metadata Types
Tracking your metadata.
ZenML supports several special metadata types to capture specific kinds of information. Here are examples of how to use the special types Uri, Path, DType, and StorageSize:
from zenml import log_artifact_metadata
from zenml.metadata.metadata_types import StorageSize, DType, Uri, Path

log_artifact_metadata(
    metadata={
        "dataset_source": Uri("gs://my-bucket/datasets/source.csv"),
        "preprocessing_script": Path("/scripts/preprocess.py"),
        "column_types": {
            "age": DType("int"),
            "income": DType("float"),
            "score": DType("int"),
        },
        "processed_data_size": StorageSize(2500000),
    }
)
In this example:
Uri is used to indicate a dataset source URI.
Path is used to specify the filesystem path to a preprocessing script.
DType is used to describe the data types of specific columns.
StorageSize is used to indicate the size of the processed data in bytes.
These special types help standardize the format of metadata and ensure that it is logged in a consistent and interpretable manner.
| how-to | https://docs.zenml.io/v/docs/how-to/track-metrics-metadata/logging-metadata | 238 |
ng the server environment.
Execution Environments
When running locally, there is no real concept of an execution environment, as the client, server, and execution environment are all the same. However, when running a pipeline remotely, ZenML needs to transfer your code and environment over to the remote orchestrator. In order to achieve this, ZenML builds Docker images known as execution environments.
ZenML handles the Docker image configuration, creation, and pushing, starting with a base image containing ZenML and Python, then adding pipeline dependencies. To manage the Docker image configuration, follow the steps in the containerize your pipeline guide, including specifying additional pip dependencies, using a custom parent image, and customizing the build process.
The execution environments do not need to be built each time a pipeline is run - you can reuse builds from previous runs to save time.
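For instance, extra dependencies can be baked into the execution environment image through DockerSettings; a minimal sketch:
from zenml import pipeline
from zenml.config import DockerSettings

# Add extra pip dependencies on top of the base ZenML image
docker_settings = DockerSettings(requirements=["scikit-learn", "pandas"])


@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    ...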
Image Builder Environment
By default, execution environments are created locally in the client environment using the local Docker client. However, this requires Docker installation and permissions. ZenML offers image builders, a special stack component, allowing users to build and push Docker images in a different specialized image builder environment.
Note that even if you don't configure an image builder in your stack, ZenML still uses the local image builder to retain consistency across all builds. In this case, the image builder environment is the same as the client environment.
| how-to | https://docs.zenml.io/v/docs/how-to/configure-python-environments | 288 |
ent external location or secrets manager provider.To configure a backup secrets store in the Helm chart, use the same approach and instructions documented for the primary secrets store, but using the backupSecretsStore configuration section instead of secretsStore, e.g.:
zenml:
  # ...

  # Backup secrets store settings. This is used as a backup for the primary
  # secrets store.
  backupSecretsStore:

    # Set to true to enable the backup secrets store.
    enabled: true

    # The type of the backup secrets store
    type: aws

    # Configuration for the AWS Secrets Manager backup secrets store
    aws:

      # The AWS Service Connector authentication method to use.
      authMethod: secret-key

      # The AWS Service Connector configuration.
      authConfig:

        # The AWS region to use. This must be set to the region where the AWS
        # Secrets Manager service that you want to use is located.
        region: us-east-1

        # The AWS credentials to use to authenticate with the AWS Secrets
        # Manager API.
        aws_access_key_id: <your AWS access key ID>
        aws_secret_access_key: <your AWS secret access key>
Database backup and recovery
An automated database backup and recovery feature is enabled by default for all Helm deployments. The ZenML server will automatically back up the database before every upgrade and restore it if the upgrade fails in a way that affects the database.
The database backup automatically created by the ZenML server is only temporary and only used as an immediate recovery in case of database migration failures. It is not meant to be used as a long-term backup solution. If you need to back up your database for long-term storage, you should use a dedicated backup solution.
Several database backup strategies are supported, depending on where and how the backup is stored. The strategy can be configured by means of the zenml.database.backupStrategy Helm value:
disabled - no backup is performed | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-helm | 371 |
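As an illustration, the strategy could be switched to a persistent dump file. This is a sketch that treats the dump-file value and the backupPVStorageSize key as assumptions, so check the chart's values.yaml for the exact names:
zenml:
  database:
    # Dump the database to a file on a persistent volume before each upgrade
    backupStrategy: dump-file
    backupPVStorageSize: 1Gi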
{
  "audience": "//iam.googleapis.com/projects/30267569827/locations/global/workloadIdentityPools/mypool/providers/myprovider",
  "subject_token_type": "urn:ietf:params:aws:token-type:aws4_request",
  "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/myrole@zenml-core.iam.gserviceaccount.com:generateAccessToken",
  "token_url": "https://sts.googleapis.com/v1/token",
  "credential_source": {
    "environment_id": "aws1",
    "region_url": "http://169.254.169.254/latest/meta-data/placement/availability-zone",
    "url": "http://169.254.169.254/latest/meta-data/iam/security-credentials",
    "regional_cred_verification_url": "https://sts.{region}.amazonaws.com?Action=GetCallerIdentity&Version=2011-06-15"
  }
}
GCP OAuth 2.0 token | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 401 |
Disable colorful logging
How to disable colorful logging in ZenML.
By default, ZenML uses colorful logging to make it easier to read logs. However, if you wish to disable this feature, you can do so by setting the following environment variable:
ZENML_LOGGING_COLORS_DISABLED=true
Note that setting this on the client environment (e.g. your local machine which runs the pipeline) will automatically disable colorful logging on remote pipeline runs. If you wish to only disable it locally but enable it for remote pipeline runs, you can set the ZENML_LOGGING_COLORS_DISABLED environment variable in your pipeline run's environment as follows:
from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "false"})

# Either add it to the decorator
@pipeline(settings={"docker": docker_settings})
def my_pipeline() -> None:
    my_step()

# Or configure the pipeline options
my_pipeline = my_pipeline.with_options(
    settings={"docker": docker_settings}
)
| how-to | https://docs.zenml.io/v/docs/how-to/control-logging/disable-colorful-logging | 214 |
pipeline again:
python run.py --training-pipeline
Now you should notice that the machine provisioned on your cloud provider has a different configuration compared to last time. As easy as that!
Bear in mind that not every orchestrator supports ResourceSettings directly. To learn more, you can read about ResourceSettings here, including the ability to attach a GPU.
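For orchestrators that do support it, a sketch of requesting hardware for a single step looks like this (the exact resource amounts are illustrative):
from zenml import step
from zenml.config import ResourceSettings


# Ask the orchestrator for more hardware for this step only;
# orchestrators without direct support will ignore these settings
@step(settings={"resources": ResourceSettings(cpu_count=8, gpu_count=1, memory="16GB")})
def train_model():
    ...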
| user-guide | https://docs.zenml.io/v/docs/user-guide/production-guide/configure-pipeline | 93 |
Manage stacks
Deploying your stack components directly from the ZenML CLI
The first step in running your pipelines on remote infrastructure is to deploy all the components that you would need, like an MLflow tracking server, a Seldon Core model deployer, and more to your cloud.
This can bring plenty of benefits like scalability, reliability, and collaboration. ZenML eases the path to production by providing a seamless way for all tools to interact with others through the use of abstractions. However, one of the most painful parts of this process, from what we see on our Slack and in general, is the deployment of these stack components.
Deploying and managing MLOps tools is tricky
It is not trivial to set up all the different tools that you might need for your pipeline.
Each tool comes with a certain set of requirements. For example, a Kubeflow installation will require you to have a Kubernetes cluster, and so would a Seldon Core deployment.
Figuring out the defaults for infra parameters is not easy. Even if you have identified the backing infra that you need for a stack component, setting up reasonable defaults for parameters like instance size, CPU, memory, etc., needs a lot of experimentation.
Many times, standard tool installations don't work out of the box. For example, to run a custom pipeline in Vertex AI, it is not enough to just run an imported pipeline. You might also need a custom service account that is configured to perform tasks like reading secrets from your secret store or talking to other GCP services that your pipeline might need.
Some tools need an additional layer of installations to enable a more secure, production-grade setup. For example, a standard MLflow tracking server deployment comes without an authentication frontend, which might expose all of your tracking data to the world if deployed as-is.
Hyperparameter tuning
Running a hyperparameter tuning trial with ZenML.
Hyperparameter tuning is not yet a first-class citizen in ZenML, but it is (high up) on our roadmap of features and will likely receive first-class ZenML support soon. In the meanwhile, the following example shows how hyperparameter tuning can currently be implemented within a ZenML run.
A basic iteration through a number of hyperparameters can be achieved with ZenML by using a simple pipeline like this:
@pipeline
def my_pipeline(step_count: int) -> None:
    data = load_data_step()
    after = []
    for i in range(step_count):
        train_step(data, learning_rate=i * 0.0001, name=f"train_step_{i}")
        after.append(f"train_step_{i}")
    model = select_model_step(..., after=after)
This is an implementation of a basic grid search (across a single dimension) that would allow for a different learning rate to be used across the same train_step. Once that step has been run for all the different learning rates, the select_model_step finds which hyperparameters gave the best results or performance.
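The selection step can then fetch the outputs of all training steps from the current run and pick a winner. A sketch of that idea, where the output key and parameter names are assumptions:
from zenml import step, get_step_context
from zenml.client import Client


@step
def select_model_step() -> None:
    # Look up the current run to access the sibling training steps
    run_name = get_step_context().pipeline_run.name
    run = Client().get_pipeline_run(run_name)

    # Collect each trained model together with the learning rate that produced it
    trained_models_by_lr = {}
    for step_name, step_info in run.steps.items():
        if step_name.startswith("train_step"):
            model = step_info.outputs["output"].load()
            lr = step_info.config.parameters["learning_rate"]
            trained_models_by_lr[lr] = model
    # ... evaluate the candidates and pick the best one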
To set up the local environment used below, follow the recommendations from the Project templates.
In pipelines/training.py, you will find a training pipeline with a hyperparameter tuning stage:
...
########## Hyperparameter tuning stage ##########
after = []
search_steps_prefix = "hp_tuning_search_"
for i, model_search_configuration in enumerate(
    MetaConfig.model_search_space
):
    step_name = f"{search_steps_prefix}{i}"
    hp_tuning_single_search(
        model_metadata=ExternalArtifact(
            value=model_search_configuration,
        ),
        id=step_name,
        dataset_trn=dataset_trn,
        dataset_tst=dataset_tst,
        target=target,
    )
    after.append(step_name)
best_model_config = hp_tuning_select_best_model(
    search_steps_prefix=search_steps_prefix, after=after
)
... | how-to | https://docs.zenml.io/how-to/build-pipelines/hyper-parameter-tuning | 381 |
ret Manager, Azure Key Vault, and Hashicorp Vault.
Secrets are sensitive data that you don't want to store in your code or configure alongside your stacks and pipelines. ZenML includes a centralized secrets store that you can use to store and access your secrets securely.
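For example, a secret can be registered once and referenced wherever a component needs credentials; a minimal sketch:
# Store sensitive values centrally
zenml secret create database_credentials --username=admin --password=s3cret

# Reference them in component configuration instead of hard-coding them,
# e.g. --some_attribute='{{database_credentials.password}}'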
Collaboration
Collaboration is a crucial aspect of any MLOps team as they often need to bring together individuals with diverse skills and expertise to create a cohesive and effective workflow for machine learning projects. A successful MLOps team requires seamless collaboration between data scientists, engineers, and DevOps professionals to develop, train, deploy, and maintain machine learning models.
With a deployed ZenML Server, users have the ability to create their own teams and project structures. They can easily share pipelines, runs, stacks, and other resources, streamlining the workflow and promoting teamwork.
Dashboard
When you start working with ZenML, you'll start with a local ZenML setup, and when you want to transition to production, you will need to deploy ZenML. Don't worry though, there is a one-click way to do it which we'll learn about later.
VS Code Extension
ZenML also provides a VS Code extension that allows you to interact with your ZenML stacks, runs and server directly from your VS Code editor. If you're working on code in your editor, you can easily switch and inspect the stacks you're using, delete and inspect pipelines as well as even switch stacks.
| getting-started | https://docs.zenml.io/getting-started/core-concepts | 306 |
he ZenML Stack components to an external resource. If you are looking for a quick, assisted tour, we recommend using the interactive CLI mode to configure Service Connectors, especially if this is your first time doing it:
zenml service-connector register -i
Example Command Output
Please enter a name for the service connector: gcp-interactive
Please enter a description for the service connector []: Interactive GCP connector example
╔══════════════════════════════════════════════════════════════════════════════╗
║                       Available service connector types                       ║
╚══════════════════════════════════════════════════════════════════════════════╝
🌀 Kubernetes Service Connector (connector type: kubernetes)
Authentication methods:
🔒 password
🔒 token
Resource types:
🌀 kubernetes-cluster
Supports auto-configuration: True
Available locally: True
Available remotely: True
This ZenML Kubernetes service connector facilitates authenticating and
connecting to a Kubernetes cluster.
The connector can be used to access any generic Kubernetes cluster by
providing pre-authenticated Kubernetes python clients to Stack Components that
are linked to it and also allows configuring the local Kubernetes CLI (i.e.
kubectl).
The Kubernetes Service Connector is part of the Kubernetes ZenML integration.
You can either install the entire integration or use a pypi extra to install it
independently of the integration:
pip install "zenml[connectors-kubernetes]" installs only prerequisites for the
Kubernetes Service Connector Type
zenml integration install kubernetes installs the entire Kubernetes ZenML
integration
A local Kubernetes CLI (i.e. kubectl) and local kubectl configuration contexts are not required to access Kubernetes clusters in your Stack Components through the Kubernetes Service Connector.
🐳 Docker Service Connector (connector type: docker)
Authentication methods:
🔒 password
Resource types:
🐳 docker-registry | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 448 |
e --authentication_secret. For example, you'd run:zenml secret create argilla_secrets --api_key="<your_argilla_api_key>"
(Visit the Argilla documentation and interface to obtain your API key.)
Then register your annotator with ZenML:
zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets
When using a deployed instance of Argilla, the instance URL must be specified without any trailing / at the end. If you are using a Hugging Face Spaces instance and its visibility is set to private, you must also set the extra_headers parameter which would include a Hugging Face token. For example:
zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --instance_url="https://[your-owner-name]-[your_space_name].hf.space" --extra_headers='{"Authorization": "Bearer <your_hugging_face_token>"}'
Finally, add all these components to a stack and set it as your active stack. For example:
zenml stack copy default annotation
# this must be done separately so that the other required stack components are first registered
zenml stack update annotation -an <YOUR_ARGILLA_ANNOTATOR>
zenml stack set annotation
# optionally also
zenml stack describe
Now if you run a simple CLI command like zenml annotator dataset list this should work without any errors. You're ready to use your annotator in your ML workflow!
How do you use it?
ZenML supports access to your data and annotations via the zenml annotator ... CLI command. We have also implemented an interface to some of the common Argilla functionality via the ZenML SDK.
You can access information about the datasets you're using with the zenml annotator dataset list. To work on annotation for a particular dataset, you can run zenml annotator dataset annotate <dataset_name>. What follows is an overview of some key components to the Argilla integration and how it can be used.
Argilla Annotator Stack Component | stack-components | https://docs.zenml.io/stack-components/annotators/argilla | 418 |
settings = {
    "orchestrator.local_docker": LocalDockerOrchestratorSettings(
        run_args={"cpu_count": 3}
    )
}

@pipeline(settings=settings)
def simple_pipeline():
    return_one()
Enabling CUDA for GPU-backed hardware
Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. This requires some extra settings customization and is essential for enabling CUDA so that the GPU can deliver its full acceleration.
| stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/local-docker | 118 |
on the HuggingFace documentation reference guide.After creating your Space, you'll notice a 'Building' status along with logs displayed on the screen. When this switches to 'Running', your Space is ready for use. If the ZenML login UI isn't visible, try refreshing the page.
In the upper-right hand corner of your space you'll see a button with three dots which, when you click on it, will offer you a menu option to "Embed this Space". (See the HuggingFace documentation for more details on this feature.) Copy the "Direct URL" shown in the box that you can now see on the screen. This should look something like this: https://<YOUR_USERNAME>-<SPACE_NAME>.hf.space. Open that URL and follow the instructions to initialize your ZenML server and set up an initial admin user account.
Connecting to your ZenML Server from your local machine
Once you have your ZenML server up and running, you can connect to it from your local machine. To do this, you'll need to get your Space's 'Direct URL' (see above).
Your Space's URL will only be available and usable for connecting from your local machine if the visibility of the space is set to 'Public'.
You can use the 'Direct URL' to connect to your ZenML server from your local machine with the following CLI command (after installing ZenML, and using your custom URL instead of the placeholder):
zenml connect --url '<YOUR_HF_SPACES_DIRECT_URL>'
You can also use the Direct URL in your browser to use the ZenML dashboard as a fullscreen application (i.e. without the HuggingFace Spaces wrapper around it).
The ZenML dashboard will currently not work when viewed from within the Hugging Face webpage (i.e. wrapped in the main https://huggingface.co/... website). This is on account of a limitation in how cookies are handled between ZenML and Hugging Face. You must view the dashboard from the 'Direct URL' (see above).
Extra configuration options | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-using-huggingface-spaces | 412 |
something with the annotations
return annotations
If you're running in a cloud environment, you can manually export the annotations, store them somewhere in a cloud environment and then reference or use those within ZenML. The precise way you do this will be very case-dependent, however, so it's difficult to provide a one-size-fits-all solution.
Prodigy Annotator Stack Component
Our Prodigy annotator component inherits from the BaseAnnotator class. There are some core methods that must be defined, like being able to register or get a dataset. Most annotators handle things like the storage of state and have their own custom features, so there are quite a few extra methods specific to Prodigy.
The core Prodigy functionality that's currently enabled from within the annotator stack component interface includes a way to register your datasets and export any annotations for use in separate steps.
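In practice, that interface is reached through the active stack. A sketch, assuming a dataset named my_dataset already exists and that the method names follow the base annotator interface:
from zenml.client import Client

# Grab the Prodigy annotator registered in the active stack
annotator = Client().active_stack.annotator

# List the registered datasets and export annotations for use in later steps
datasets = annotator.get_datasets()
annotations = annotator.get_labeled_data(dataset_name="my_dataset")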
| stack-components | https://docs.zenml.io/v/docs/stack-components/annotators/prodigy | 200 |
┏━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ MODEL_URI          ┃ s3://zenprojects/seldon_model_deployer_step/output/884/seldon                        ┃
┠────────────────────╂──────────────────────────────────────────────────────────────────────────────────────┨
┃ PIPELINE_NAME      ┃ seldon_deployment_pipeline                                                           ┃
┠────────────────────╂──────────────────────────────────────────────────────────────────────────────────────┨
┃ RUN_NAME           ┃ seldon_deployment_pipeline-11_Apr_22-09_39_27_648527                                 ┃
┠────────────────────╂──────────────────────────────────────────────────────────────────────────────────────┨
┃ PIPELINE_STEP_NAME ┃ seldon_model_deployer_step                                                           ┃
┠────────────────────╂──────────────────────────────────────────────────────────────────────────────────────┨
┃ PREDICTION_URL     ┃ http://abb84c444c7804aa98fc8c097896479d-377673393.us-east-1.elb.amazonaws.com/seldon/… ┃
┠────────────────────╂──────────────────────────────────────────────────────────────────────────────────────┨
┃ SELDON_DEPLOYMENT  ┃ zenml-8cbe671b-9fce-4394-a051-68e001f92765                                           ┃
┠────────────────────╂──────────────────────────────────────────────────────────────────────────────────────┨
┃ STATUS             ┃ ✅                                                                                   ┃
┠────────────────────╂──────────────────────────────────────────────────────────────────────────────────────┨
┃ STATUS_MESSAGE     ┃ Seldon Core deployment 'zenml-8cbe671b-9fce-4394-a051-68e001f92765' is available     ┃
┠────────────────────╂──────────────────────────────────────────────────────────────────────────────────────┨
┃ UUID               ┃ 8cbe671b-9fce-4394-a051-68e001f92765                                                 ┃
┗━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ | stack-components | https://docs.zenml.io/stack-components/model-deployers | 431 |
needs to be able to read from the artifact store:
The pod needs to be authenticated to push to the container registry in your active stack.
In case the parent image you use in your DockerSettings is stored in a private registry, the pod needs to be authenticated to pull from this registry.
If you configured your image builder to store the build context in the artifact store, the pod needs to be authenticated to read files from the artifact store storage.
ZenML is not yet able to handle setting all of the credentials for the various combinations of container registries and artifact stores on the Kaniko build pod, which is why you're required to set this up yourself for now. The following section outlines how to handle it in the most straightforward (and probably also most common) scenario: when the Kubernetes cluster you're using for the Kaniko build is hosted on the same cloud provider as your container registry (and potentially the artifact store). For all other cases, check out the official Kaniko repository for more information.
Add permissions to push to ECR by attaching the EC2InstanceProfileForImageBuilderECRContainerBuilds policy to your EKS node IAM role.
Configure the image builder to set some required environment variables on the Kaniko build pod:
# register a new image builder with the environment variables
zenml image-builder register <NAME> \
--flavor=kaniko \
--kubernetes_context=<KUBERNETES_CONTEXT> \
--env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]'
# or update an existing one
zenml image-builder update <NAME> \
--env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]'
Check out the Kaniko docs for more information.
Enable workload identity for your cluster
Follow the steps described here to create a Google service account, a Kubernetes service account as well as an IAM policy binding between them. | stack-components | https://docs.zenml.io/v/docs/stack-components/image-builders/kaniko | 420 |
he need to rerun unchanged parts of your pipeline.With ZenML, you can easily trace an artifact back to its origins and understand the exact sequence of executions that led to its creation, such as a trained model. This feature enables you to gain insights into the entire lineage of your artifacts, providing a clear understanding of how your data has been processed and transformed throughout your machine-learning pipelines. With ZenML, you can ensure the reproducibility of your results, and identify potential issues or bottlenecks in your pipelines. This level of transparency and traceability is essential for maintaining the reliability and trustworthiness of machine learning projects, especially when working in a team or across different environments.
For more details on how to adjust the names or versions assigned to your artifacts, assign tags to them, or adjust other artifact properties, see the documentation on artifact versioning and configuration.
By tracking the lineage of artifacts across environments and stacks, ZenML enables ML engineers to reproduce results and understand the exact steps taken to create a model. This is crucial for ensuring the reliability and reproducibility of machine learning models, especially when working in a team or across different environments.
Saving and Loading Artifacts with Materializers
Materializers play a crucial role in ZenML's artifact management system. They are responsible for handling the serialization and deserialization of artifacts, ensuring that data is consistently stored and retrieved from the artifact store. Each materializer stores data flowing through a pipeline in one or more files within a unique directory in the artifact store: | how-to | https://docs.zenml.io/how-to/handle-data-artifacts/artifact-versioning | 303 |
a by comparing two datasets with identical schema.Target Drift reports and tests: helps detect and explore changes in the target function and/or model predictions by comparing two datasets where the target and/or prediction columns are available.
Regression Performance or Classification Performance reports and tests: evaluate the performance of a model by analyzing a single dataset where both the target and prediction columns are available. It can also compare it to the past performance of the same model, or the performance of an alternative model by providing a second dataset.
You should consider one of the other Data Validator flavors if you need a different set of data validation features.
How do you deploy it?
The Evidently Data Validator flavor is included in the Evidently ZenML integration, you need to install it on your local machine to be able to register an Evidently Data Validator and add it to your stack:
zenml integration install evidently -y
The Data Validator stack component does not have any configuration parameters. Adding it to a stack is as simple as running e.g.:
# Register the Evidently data validator
zenml data-validator register evidently_data_validator --flavor=evidently
# Register and set a stack with the new data validator
zenml stack register custom_stack -dv evidently_data_validator ... --set
How do you use it?
Data Profiling
Evidently's profiling functions take in a pandas.DataFrame dataset or a pair of datasets and generate results in the form of a Report object. | stack-components | https://docs.zenml.io/stack-components/data-validators/evidently | 294 |
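This mirrors Evidently's own API; a minimal sketch of producing a data drift Report from two pandas DataFrames (the preset and file names are illustrative):
import pandas as pd
from evidently.metric_preset import DataDriftPreset
from evidently.report import Report

# Hypothetical reference and current datasets with identical schemas
reference = pd.read_csv("reference.csv")
current = pd.read_csv("current.csv")

report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
print(report.json())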
Load artifacts into memory
Often ZenML pipeline steps consume artifacts produced by one another directly in the pipeline code, but there are scenarios where you need to pull external data into your steps. Such external data could be artifacts produced by non-ZenML code. For those cases, it is advised to use ExternalArtifact, but what if we plan to exchange data created with other ZenML pipelines?
ZenML pipelines are first compiled and only executed at some later point. During the compilation phase, all function calls are executed, and this data is fixed as step input parameters. Given all this, the late materialization of dynamic objects, like data artifacts, is crucial. Without late materialization, it would not be possible to pass not-yet-existing artifacts as step inputs, or their metadata, which is often the case in a multi-pipeline setting.
We identify two major use cases for exchanging artifacts between pipelines:
You semantically group your data products using ZenML Models
You prefer to use ZenML Client to bring all the pieces together
We recommend using models to group and access artifacts across pipelines. Find out how to load an artifact from a ZenML Model here.
Use client methods to exchange artifacts
If you don't yet use the Model Control Plane, you can still exchange data between pipelines with late materialization. Let's rework the do_predictions pipeline code as follows:
from typing import Annotated

from zenml import step, pipeline
from zenml.client import Client
import pandas as pd
from sklearn.base import ClassifierMixin


@step
def predict(
    model1: ClassifierMixin,
    model2: ClassifierMixin,
    model1_metric: float,
    model2_metric: float,
    data: pd.DataFrame,
) -> Annotated[pd.Series, "predictions"]:
    # compare which model performs better on the fly
    if model1_metric < model2_metric:
        predictions = pd.Series(model1.predict(data))
    else:
        predictions = pd.Series(model2.predict(data))
    return predictions
@step | how-to | https://docs.zenml.io/how-to/handle-data-artifacts/load-artifacts-into-memory | 399 |
ct store `s3-zenfiles` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         ┃ CONNECTOR NAME ┃ CONNECTOR TYPE ┃ RESOURCE TYPE ┃ RESOURCE NAMES ┃
┠──────────────────────────────────────╂────────────────╂────────────────╂───────────────╂────────────────┨
┃ 4a550c82-aa64-4a48-9c7f-d5e127d77a44 ┃ aws-multi-type ┃ 🔶 aws         ┃ 📦 s3-bucket  ┃ s3://zenfiles  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┛
The following is an example of connecting the same Stack Component to the remote resource using the interactive CLI mode:
zenml artifact-store connect s3-zenfiles -i
Example Command Output
The following connectors have compatible resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         ┃ CONNECTOR NAME        ┃ CONNECTOR TYPE ┃ RESOURCE TYPE ┃ RESOURCE NAMES ┃
┠──────────────────────────────────────╂───────────────────────╂────────────────╂───────────────╂────────────────┨
┃ 373a73c2-8295-45d4-a768-45f5a0f744ea ┃ aws-multi-type        ┃ 🔶 aws         ┃ 📦 s3-bucket  ┃ s3://zenfiles  ┃
┠──────────────────────────────────────╂───────────────────────╂────────────────╂───────────────╂────────────────┨
┃ fa9325ab-ce01-4404-aec3-61a3af395d48 ┃ aws-s3-multi-instance ┃ 🔶 aws         ┃ 📦 s3-bucket  ┃ s3://zenfiles  ┃
┠──────────────────────────────────────╂───────────────────────╂────────────────╂───────────────╂────────────────┨
┃ 19edc05b-92db-49de-bc84-aa9b3fb8261a ┃ aws-s3-zenfiles       ┃ 🔶 aws         ┃ 📦 s3-bucket  ┃ s3://zenfiles  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┛
Please enter the name or ID of the connector you want to use: aws-s3-zenfiles
Successfully connected artifact store `s3-zenfiles` to the following resources: | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 755 |