page_content | parent_section | url | token_count
---|---|---|---|
configuration files, workload or managed identities. This method may constitute a security risk, because it can give users access to the same cloud resources and services that the ZenML Server itself is configured to access. For this reason, all implicit authentication methods are disabled by default and need to be explicitly enabled by setting the ZENML_ENABLE_IMPLICIT_AUTH_METHODS environment variable or the helm chart enableImplicitAuthMethods configuration option to true in the ZenML deployment.
This authentication method doesn't require any credentials to be explicitly configured. It automatically discovers and uses credentials from one of the following sources:
environment variables
workload identity - if the application is deployed to an Azure Kubernetes Service with Managed Identity enabled. This option can only be used when running the ZenML server on an AKS cluster.
managed identity - if the application is deployed to an Azure host with Managed Identity enabled. This option can only be used when running the ZenML client or server on an Azure host.
Azure CLI - if a user has signed in via the Azure CLI az login command.
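For example, enabling implicit authentication on a ZenML deployment might look like the following (a hedged sketch; the exact placement of the Helm value depends on your chart setup):

```sh
# For a Docker or manual server deployment, set the environment variable
# on the ZenML server process:
export ZENML_ENABLE_IMPLICIT_AUTH_METHODS=true

# For a Helm deployment, the equivalent chart value (placement assumed):
#   zenml:
#     enableImplicitAuthMethods: true
```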
This is the quickest and easiest way to authenticate to Azure services. However, the results depend on how ZenML is deployed and the environment where it is used, and are thus not fully reproducible:
when used with the default local ZenML deployment or a local ZenML server, the credentials are the same as those used by the Azure CLI or extracted from local environment variables.
when connected to a ZenML server, this method only works if the ZenML server is deployed in Azure and will use the workload identity attached to the Azure resource where the ZenML server is running (e.g. an AKS cluster). The permissions of the managed identity may need to be adjusted to allow listing and accessing/describing the Azure resources that the connector is configured to access. | how-to | https://docs.zenml.io/how-to/auth-management/azure-service-connector | 362 |
Implement a custom integration
Creating an external integration and contributing to ZenML
One of the main goals of ZenML is to find some semblance of order in the ever-growing MLOps landscape. ZenML already provides numerous integrations with many popular tools and allows you to implement your own stack component flavors to fill in any remaining gaps.
However, what if you want to make your extension of ZenML part of the main codebase, to share it with others? If you are such a person, e.g., a tooling provider in the ML/MLOps space, or just want to contribute a tooling integration to ZenML, this guide is intended for you.
Step 1: Plan out your integration
In the previous page, we looked at the categories and abstractions that core ZenML defines. In order to create a new integration into ZenML, you would need to first find the categories that your integration belongs to. The list of categories can be found here as well.
Note that one integration may belong to different categories: For example, the cloud integrations (AWS/GCP/Azure) contain container registries, artifact stores etc.
Step 2: Create individual stack component flavors
Each category selected above would correspond to a stack component type. You can now start developing individual stack component flavors for this type by following the detailed instructions on the respective pages.
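As a rough orientation, the skeleton of such a flavor might look like the sketch below (the module layout and orchestrator class are illustrative, not part of any existing integration; see the custom flavor pages for the authoritative interface):

```python
from typing import Type

from zenml.orchestrators import BaseOrchestrator, BaseOrchestratorFlavor


class MyOrchestratorFlavor(BaseOrchestratorFlavor):
    """Flavor for a hypothetical `MyOrchestrator` stack component."""

    @property
    def name(self) -> str:
        # The name used on the CLI, e.g. `--flavor=my_orchestrator`
        return "my_orchestrator"

    @property
    def implementation_class(self) -> Type[BaseOrchestrator]:
        # Import inside the property to avoid pulling in heavy
        # dependencies before the flavor is actually used.
        from my_integration.orchestrators import MyOrchestrator  # hypothetical

        return MyOrchestrator
```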
Before you package your new components into an integration, you may want to use/test them as a regular custom flavor. For instance, if you are developing a custom orchestrator and your flavor class MyOrchestratorFlavor is defined in flavors/my_flavor.py, you can register it by using:
zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor | how-to | https://docs.zenml.io/how-to/stack-deployment/implement-a-custom-integration | 365 |
…get_historical_features(entity_dict, features)
    ...

Note that ZenML's use of Pydantic to serialize and deserialize inputs stored in the ZenML metadata means that we are limited to basic data types. Pydantic cannot handle Pandas DataFrames, for example, or datetime values, so in the above code you can see that we have to convert them at various points.
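As an illustration of the kind of conversion this forces, a step can accept timestamps as ISO strings and convert the retrieved features back to a plain dictionary (a hedged sketch, not the exact step from the example):

```python
from datetime import datetime

import pandas as pd
from zenml import step


@step
def get_features(start_time: str, end_time: str) -> dict:
    """Accept timestamps as strings, since Pydantic can't serialize datetimes."""
    start = datetime.fromisoformat(start_time)
    end = datetime.fromisoformat(end_time)
    # ... query the feature store for features between `start` and `end` ...
    df = pd.DataFrame()  # placeholder for the retrieved features
    # Return a basic, serializable structure instead of the raw DataFrame.
    return df.to_dict(orient="list")
```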
For more information and a full list of configurable attributes of the Feast feature store, check out the SDK Docs .
PreviousFeature Stores
NextDevelop a Custom Feature Store
Last updated 8 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/feature-stores/feast | 115 |
Adding tests to your Evidently test step configuration:

to add a single test or test preset: call EvidentlyTestConfig.test with an Evidently test or test preset class name (or class path or class). The rest of the parameters are the same ones that you would usually pass to the Evidently test or test preset class constructor.

to generate multiple tests, similar to calling the Evidently column test generator: call EvidentlyTestConfig.test_generator with an Evidently test or test preset class name (or class path or class) and a list of column names. The rest of the parameters are the same ones that you would usually pass to the Evidently test or test preset class constructor.
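Concretely, a list of tests assembled this way might look as follows (a hedged sketch; the Evidently class names are examples, and extra keyword arguments are forwarded to the Evidently constructors):

```python
from zenml.integrations.evidently.tests import EvidentlyTestConfig

tests = [
    # A single test preset, referenced by its Evidently class name.
    EvidentlyTestConfig.test("DataQualityTestPreset"),
    # One test generated per listed column, mirroring Evidently's
    # column test generator.
    EvidentlyTestConfig.test_generator(
        "TestColumnQuantile", columns=["Age", "Fare"], quantile=0.25
    ),
]
```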
The ZenML Evidently test step can then be inserted into your pipeline where it can take in two datasets and outputs the Evidently test suite results generated in both JSON and HTML formats, e.g.:
from zenml import pipeline

@pipeline(enable_cache=False, settings={"docker": docker_settings})
def text_data_test_pipeline():
    """Links all the steps together in a pipeline."""
    data = data_loader()
    reference_dataset, comparison_dataset = data_splitter(data)
    json_report, html_report = text_data_test(
        reference_dataset=reference_dataset,
        comparison_dataset=comparison_dataset,
    )

text_data_test_pipeline()
For a version of the same step that works with a single dataset, simply don't pass any comparison dataset:
text_data_test(reference_dataset=reference_dataset)
You should consult the official Evidently documentation for more information on what each test is useful for and what data columns it requires as input.
The evidently_test_step step also allows for additional Test options to be passed to the TestSuite constructor e.g.:
from zenml.integrations.evidently.steps import (
    EvidentlyColumnMapping,
    evidently_test_step,
)

text_data_test = evidently_test_step.with_options(
    parameters=dict(
        test_options=[
            (
                "evidently.options.ColorOptions",
                {
                    "primary_color": "#5a86ad",
                    "fill_color": "#fff4f2",
                    "zero_line_color": "#016795",
                    "current_data_color": "#c292a1",
                    "reference_data_color": "#017b92",
                },
            ),
        ],
    ),
) | stack-components | https://docs.zenml.io/stack-components/data-validators/evidently | 437 |
Use your own Dockerfiles
In some cases, you might want full control over the resulting Docker image but want to build a Docker image dynamically each time a pipeline is executed. To make this process easier, ZenML allows you to specify a custom Dockerfile as well as build context directory and build options. ZenML then builds an intermediate image based on the Dockerfile you specified and uses the intermediate image as the parent image.
Here is how the build process looks:
No Dockerfile specified: If any of the options regarding requirements, environment variables or copying files require us to build an image, ZenML will build this image. Otherwise the parent_image will be used to run the pipeline.
Dockerfile specified: ZenML will first build an image based on the specified Dockerfile. If any of the options regarding requirements, environment variables, or copying files require an additional image built on top of that, ZenML will build a second image. If not, the image built from the specified Dockerfile will be used to run the pipeline.
Depending on the configuration of the DockerSettings object, requirements will be installed in the following order (each step optional):
The packages installed in your local Python environment.
The packages specified via the requirements attribute.
The packages specified via the required_integrations and potentially stack requirements.
Depending on the configuration of your Docker settings, this intermediate image might also be used directly to execute your pipeline steps.
from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(
    dockerfile="/path/to/dockerfile",
    build_context_root="/path/to/build/context",
    build_options=...,
)

@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    ...
PreviousSpecify pip dependencies and apt packages
NextWhich files are built into the image
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/customize-docker-builds/use-your-own-docker-files | 349 |
 CONNECTOR ID                         │ CONNECTOR NAME      │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES
──────────────────────────────────────┼─────────────────────┼────────────────┼───────────────┼──────────────────────
 405034fe-5e6e-4d29-ba62-8ae025381d98 │ gcs-zenml-bucket-sl │ 🔵 gcp         │ 📦 gcs-bucket │ gs://zenml-bucket-sl
```
register and connect a Google Cloud Image Builder Stack Component to the target GCP project:

```sh
zenml image-builder register gcp-zenml-core --flavor gcp
```
Example Command Output
```text
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully registered image_builder `gcp-zenml-core`.
```
```sh
zenml image-builder connect gcp-zenml-core --connector gcp-cloud-builder-zenml-core
```
Example Command Output
```text
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully connected image builder `gcp-zenml-core` to the following resources:
 CONNECTOR ID                         │ CONNECTOR NAME               │ CONNECTOR TYPE │ RESOURCE TYPE  │ RESOURCE NAMES
──────────────────────────────────────┼──────────────────────────────┼────────────────┼────────────────┼────────────────
 648c1016-76e4-4498-8de7-808fd20f057b │ gcp-cloud-builder-zenml-core │ 🔵 gcp         │ 🔵 gcp-generic │ zenml-core
``` | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 579 |
…to your stack:

zenml integration install azure -y

The only configuration parameter mandatory for registering an Azure Artifact Store is the root path URI, which needs to point to an Azure Blob Storage container and take the form az://container-name or abfs://container-name. Please read the Azure Blob Storage documentation on how to configure an Azure Blob Storage container.
With the URI to your Azure Blob Storage container known, registering an Azure Artifact Store can be done as follows:
# Register the Azure artifact store
zenml artifact-store register az_store -f azure --path=az://container-name
# Register and set a stack with the new artifact store
zenml stack register custom_stack -a az_store ... --set
Depending on your use case, however, you may also need to provide additional configuration parameters pertaining to authentication to match your deployment scenario.
Authentication Methods
Integrating and using an Azure Artifact Store in your pipelines is not possible without employing some form of authentication. If you're looking for a quick way to get started locally, you can use the Implicit Authentication method. However, the recommended way to authenticate to the Azure cloud platform is through an Azure Service Connector. This is particularly useful if you are configuring ZenML stacks that combine the Azure Artifact Store with other remote stack components also running in Azure.
You will need the following information to configure Azure credentials for ZenML, depending on which type of Azure credentials you want to use:
an Azure connection string
an Azure account key
the client ID, client secret and tenant ID of the Azure service principal
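If you register these credentials directly with ZenML rather than through a Service Connector, the flow might look like this (a hedged sketch using the account-key variant; adapt the secret keys to your credential type):

```sh
# Store the credentials in a ZenML secret ...
zenml secret create az_secret \
    --account_name='<YOUR_ACCOUNT_NAME>' \
    --account_key='<YOUR_ACCOUNT_KEY>'

# ... and reference the secret when registering the artifact store
zenml artifact-store register az_store -f azure \
    --path='az://container-name' \
    --authentication_secret=az_secret
```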
For more information on how to retrieve information about your Azure Storage Account and Access Key or connection string, please refer to this Azure guide.
For information on how to configure an Azure service principal, please consult the Azure documentation. | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/azure | 356 |
                        │ gs://zenml-core_cloudbuild
                        │ gs://zenml-datasets
                        │ gs://zenml-internal-artifact-store
                        │ gs://zenml-kubeflow-artifact-store
                        │ gs://zenml-project-time-series-bucket
────────────────────────┼──────────────────────────────────────────
 🌀 kubernetes-cluster  │ zenml-test-cluster
────────────────────────┼──────────────────────────────────────────
 🐳 docker-registry     │ gcr.io/zenml-core
────────────────────────┴──────────────────────────────────────────
Long-lived credentials (API keys, account keys)
This is the magic formula of authentication methods. When paired with another ability, such as automatically generating short-lived API tokens, or impersonating accounts or assuming roles, this is the ideal authentication mechanism to use, particularly when using ZenML in production and when sharing results with other members of your ZenML team.
As a general best practice, but implemented particularly well for cloud platforms, account passwords are never directly used as a credential when authenticating to the cloud platform APIs. There is always a process in place that exchanges the account/password credential for another type of long-lived credential:
AWS uses the aws configure CLI command
GCP offers the gcloud auth application-default login CLI commands
Azure provides the az login CLI command
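A Service Connector can then lift these intermediate credentials straight from the local client configuration, e.g. (a hedged example for AWS):

```sh
# Auto-configure a connector from the long-lived credentials that
# `aws configure` stored in the local `connectors` profile.
AWS_PROFILE=connectors zenml service-connector register aws-secret-key \
    --type aws --auth-method secret-key --auto-configure
```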
None of your original login information is stored on your local machine or used to access workloads. Instead, an API key, account key or some other form of intermediate credential is generated and stored on the local host and used to authenticate to remote cloud service APIs. | how-to | https://docs.zenml.io/how-to/auth-management/best-security-practices | 404 |
Retrieval evaluation
See how the retrieval component responds to changes in the pipeline.
The retrieval component of our RAG pipeline is responsible for finding relevant documents or document chunks to feed into the generation component. In this section we'll explore how to evaluate the performance of the retrieval component of your RAG pipeline. We're checking how accurate the semantic search is, or in other words how relevant the retrieved documents are to the query.
Our retrieval component takes the incoming query and converts it into a vector or embedded representation that can be used to search for relevant documents. We then use this representation to search through a corpus of documents and retrieve the most relevant ones.
Manual evaluation using handcrafted queries
The most naive and simple way to check this would be to handcraft some queries where we know the specific documents needed to answer it. We can then check if the retrieval component is able to retrieve these documents. This is a manual evaluation process and can be time-consuming, but it's a good way to get a sense of how well the retrieval component is working. It can also be useful to target known edge cases or difficult queries to see how the retrieval component handles those known scenarios.
Implementing this is pretty simple - you just need to create some queries and check the retrieved documents. After testing the basic inference of our RAG setup quite a bit, some clear areas emerged where the retrieval component could be improved. I looked in our documentation to find some examples where the information could only be found in a single page and then wrote some queries that would require the retrieval component to find that page. For example, the query "How do I get going with the Label Studio integration? What are the first steps?" would require the retrieval component to find the Label Studio integration page. Some of the other examples used are: | user-guide | https://docs.zenml.io/user-guide/llmops-guide/evaluation/retrieval | 362 |
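To make the manual check just described concrete, here is a minimal sketch (`query_similar_docs` stands in for whatever retrieval function your pipeline exposes; the query/URL pairs are illustrative):

```python
retrieval_tests = [
    # (question, substring expected in at least one retrieved URL)
    (
        "How do I get going with the Label Studio integration? "
        "What are the first steps?",
        "label-studio",
    ),
]


def run_retrieval_tests() -> float:
    """Return the fraction of queries whose expected page was retrieved."""
    hits = 0
    for question, expected_fragment in retrieval_tests:
        urls = query_similar_docs(question)  # hypothetical retrieval helper
        if any(expected_fragment in url for url in urls):
            hits += 1
    return hits / len(retrieval_tests)
```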
…account as well as an IAM policy binding between them. Grant the Google service account permissions to push to your GCR registry and read from your GCP bucket.
Configure the image builder to run in the correct namespace and use the correct service account:
# register a new image builder with namespace and service account
zenml image-builder register <NAME> \
--flavor=kaniko \
--kubernetes_context=<KUBERNETES_CONTEXT> \
--kubernetes_namespace=<KUBERNETES_NAMESPACE> \
--service_account_name=<KUBERNETES_SERVICE_ACCOUNT_NAME>
# --executor_args='["--compressed-caching=false", "--use-new-run=true"]'
# or update an existing one
zenml image-builder update <NAME> \
--kubernetes_namespace=<KUBERNETES_NAMESPACE> \
--service_account_name=<KUBERNETES_SERVICE_ACCOUNT_NAME>
Check out the Kaniko docs for more information.
Create a Kubernetes configmap for a Docker config that uses the Azure credentials helper:
kubectl create configmap docker-config --from-literal='config.json={ "credHelpers": { "mycr.azurecr.io": "acr-env" } }'
Follow these steps to configure your cluster to use a managed identity
Configure the image builder to mount the configmap in the Kaniko build pod:
# register a new image builder with the mounted configmap
zenml image-builder register <NAME> \
--flavor=kaniko \
--kubernetes_context=<KUBERNETES_CONTEXT> \
--volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \
--volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]'
# --executor_args='["--compressed-caching=false", "--use-new-run=true"]'
# or update an existing one
zenml image-builder update <NAME> \
--volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \
--volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]'
Check out the Kaniko docs for more information.
Passing additional parameters to the Kaniko build
You can pass additional parameters to the Kaniko build by setting the executor_args attribute of the image builder. | stack-components | https://docs.zenml.io/v/docs/stack-components/image-builders/kaniko | 486 |
…ZenMLServerSecretsStoreCreator \
    --condition None

# NOTE: use the GCP project NUMBER, not the project ID, in the condition
gcloud projects add-iam-policy-binding <your GCP project ID> \
    --member serviceAccount:<your GCP service account email> \
    --role projects/<your GCP project ID>/roles/ZenMLServerSecretsStoreEditor \
    --condition 'title=limit_access_zenml,description="Limit access to secrets with prefix zenml-",expression=resource.name.startsWith("projects/<your GCP project NUMBER>/secrets/zenml-")'
Example configuration for the GCP Secrets Store:
zenml:
  # ...
  # Secrets store settings. This is used to store centralized secrets.
  secretsStore:
    # Set to false to disable the secrets store.
    enabled: true
    # The type of the secrets store
    type: gcp
    # Configuration for the GCP Secrets Manager secrets store
    gcp:
      # The GCP Service Connector authentication method to use.
      authMethod: service-account
      # The GCP Service Connector configuration.
      authConfig:
        # The GCP project ID to use. This must be set to the project ID where the
        # GCP Secrets Manager service that you want to use is located.
        project_id: my-gcp-project
        # GCP credentials JSON to use to authenticate with the GCP Secrets
        # Manager instance.
        google_application_credentials: |
          {
            "type": "service_account",
            "project_id": "my-project",
            "private_key_id": "...",
            "private_key": "-----BEGIN PRIVATE KEY-----\n...=\n-----END PRIVATE KEY-----\n",
            "client_email": "...",
            "client_id": "...",
            "auth_uri": "https://accounts.google.com/o/oauth2/auth",
            "token_uri": "https://oauth2.googleapis.com/token",
            "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
            "client_x509_cert_url": "..."
          }

serviceAccount:
  # If you're using workload identity, you need to annotate the service
  # account with the GCP service account name (see https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)
  annotations:
    iam.gke.io/gcp-service-account: <SERVICE_ACCOUNT_NAME>@<PROJECT_NAME>.iam.gserviceaccount.com | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-helm | 471 |
. See the deployment section for more information.The name of your Kubernetes context which points to your remote cluster. Run kubectl config get-contexts to see a list of available contexts. NOTE: this is no longer required if you are using a Service Connector to connect your Kubeflow Orchestrator Stack Component to the remote Kubernetes cluster.
A remote artifact store as part of your stack.
A remote container registry as part of your stack.
We can then register the orchestrator and use it in our active stack. This can be done in two ways:
If you have a Service Connector configured to access the remote Kubernetes cluster, you no longer need to set the kubernetes_context attribute to a local kubectl context. In fact, you don't need the local Kubernetes CLI at all. You can connect the stack component to the Service Connector instead:

$ zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubeflow
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully registered orchestrator `<ORCHESTRATOR_NAME>`. | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/kubeflow | 224 |
ment_uri", is_deployment_artifact=True)],
]:
...The ArtifactConfig object allows configuring model linkage directly on the artifact, and you specify whether it's for a model or deployment by using the is_model_artifact and is_deployment_artifact flags (as shown above) else it will be assumed to be a data artifact.
Saving intermediate artifacts
It is often handy to save some of your work half-way: steps like epoch-based training can run slowly, and you don't want to lose any checkpoints along the way if an error occurs. You can use the save_artifact utility function to save your data assets as ZenML artifacts. Moreover, if your step has the Model context configured in the @pipeline or @step decorator, it will be automatically linked to it, so you can get easy access to it using the Model Control Plane features.
from sklearn.base import ClassifierMixin
from zenml import step, Model
from zenml.artifacts.utils import save_artifact
import pandas as pd
from typing_extensions import Annotated
from zenml.artifacts.artifact_config import ArtifactConfig
@step(model=Model(name="MyModel", version="1.2.42"))
def trainer(
trn_dataset: pd.DataFrame,
) -> Annotated[
ClassifierMixin, ArtifactConfig("trained_model", is_model_artifact=True)
]: # this configuration will be applied to `model` output
"""Step running slow training."""
...
for epoch in epochs:
checkpoint = model.train(epoch)
# this will save each checkpoint in `training_checkpoint` artifact
# with distinct version e.g. `1.2.42_0`, `1.2.42_1`, etc.
# Checkpoint artifacts will be linked to `MyModel` version `1.2.42`
# implicitly.
save_artifact(
data=checkpoint,
name="training_checkpoint",
version=f"1.2.42_{epoch}",
...
return model
Link artifacts explicitly
If you would like to link an artifact to a model outside of the step context, or even outside a step entirely, you can use the link_artifact_to_model function. All you need is the artifact that is ready to be linked and the configuration of a model.
from zenml import step, Model, link_artifact_to_model, save_artifact
from zenml.client import Client | how-to | https://docs.zenml.io/how-to/use-the-model-control-plane/linking-model-binaries-data-to-models | 463 |
…Connector Details
 PROPERTY         │ VALUE
──────────────────┼─────────────────
 NAME             │ gcp-interactive
 TYPE             │ 🔵 gcp
 AUTH METHOD      │ user-account
 RESOURCE TYPES   │ 📦 gcs-bucket
 RESOURCE NAME    │ <multiple>
 SESSION DURATION │ N/A
 EXPIRES IN       │ N/A
 SHARED           │ ➖
Configuration
 PROPERTY          │ VALUE
───────────────────┼────────────
 project_id        │ zenml-core
 user_account_json │ [HIDDEN]
No labels are set for this service connector.
The service connector configuration has access to the following resources:
 RESOURCE TYPE │ RESOURCE NAMES
───────────────┼─────────────────────────────
 📦 gcs-bucket │ gs://annotation-gcp-store
               │ gs://zenml-bucket-sl
               │ gs://zenml-core.appspot.com
               │ gs://zenml-core_cloudbuild
               │ gs://zenml-datasets
Would you like to continue with the auto-discovered configuration or switch to manual ? (auto, manual) [auto]:
The following GCP GCS bucket instances are reachable through this connector:
gs://annotation-gcp-store
gs://zenml-bucket-sl
gs://zenml-core.appspot.com | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 651 |
ents locally!
Running a pipeline on a cloud stack

Now that we have our remote artifact store registered, we can register a new stack with it, just like we did in the previous chapter:
zenml stack register local_with_remote_storage -o default -a cloud_artifact_store
Now, using the code from the previous chapter, we run a training pipeline:
Set our local_with_remote_storage stack active:
zenml stack set local_with_remote_storage
Let us continue with the example from the previous page and run the training pipeline:
python run.py --training-pipeline
When you run that pipeline, ZenML will automatically store the artifacts in the specified remote storage, ensuring that they are preserved and accessible for future runs and by your team members. You can ask your colleagues to connect to the same ZenML server, and you will notice that if they run the same pipeline, the pipeline would be partially cached, even if they have not run the pipeline themselves before.
You can list your artifact versions as follows:
# This will give you the artifacts from the last 15 minutes
zenml artifact version list --created="gte:$(date -d '15 minutes ago' '+%Y-%m-%d %H:%M:%S')"
ZenML Pro features an Artifact Control Plane to visualize artifact versions:
You will notice above that some artifacts are stored locally, while others are stored in a remote storage location.
By connecting remote storage, you're taking a significant step towards building a collaborative and scalable MLOps workflow. Your artifacts are no longer tied to a single machine but are now part of a cloud-based ecosystem, ready to be shared and built upon.
PreviousUnderstanding stacks
NextOrchestrate on the cloud
Last updated 12 days ago | user-guide | https://docs.zenml.io/v/docs/user-guide/production-guide/remote-storage | 352 |
GCP Service Connector
Configuring GCP Service Connectors to connect ZenML to GCP resources such as GCS buckets, GKE Kubernetes clusters, and GCR container registries.
The ZenML GCP Service Connector facilitates the authentication and access to managed GCP services and resources. These encompass a range of resources, including GCS buckets, GCR container repositories, and GKE clusters. The connector provides support for various authentication methods, including GCP user accounts, service accounts, short-lived OAuth 2.0 tokens, and implicit authentication.
To ensure heightened security measures, this connector always issues short-lived OAuth 2.0 tokens to clients instead of long-lived credentials unless explicitly configured to do otherwise. Furthermore, it includes automatic configuration and detection of credentials locally configured through the GCP CLI.
This connector serves as a general means of accessing any GCP service by issuing OAuth 2.0 credential objects to clients. Additionally, the connector can handle specialized authentication for GCS, Docker, and Kubernetes Python clients. It also allows for the configuration of local Docker and Kubernetes CLIs.
$ zenml service-connector list-types --type gcp
 NAME                  │ TYPE   │ RESOURCE TYPES        │ AUTH METHODS     │ LOCAL │ REMOTE
───────────────────────┼────────┼───────────────────────┼──────────────────┼───────┼────────
 GCP Service Connector │ 🔵 gcp │ 🔵 gcp-generic        │ implicit         │ ✅    │ ✅
                       │        │ 📦 gcs-bucket         │ user-account     │       │
                       │        │ 🌀 kubernetes-cluster │ service-account  │       │
                       │        │ 🐳 docker-registry    │ external-account │       │
                       │        │                       │ oauth2-token     │       │ | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 469 |
…🌀 kubernetes-cluster │ zenhacks-cluster
───────────────────────┼───────────────────────────────────────────────
 🐳 docker-registry    │ 715803424590.dkr.ecr.us-east-1.amazonaws.com
───────────────────────┴───────────────────────────────────────────────
The Service Connector configuration shows long-lived credentials were lifted from the local environment and the AWS Session Token authentication method was configured:
zenml service-connector describe aws-session-token
Example Command Output
Service connector 'aws-session-token' of type 'aws' with id '3ae3e595-5cbc-446e-be64-e54e854e0e3f' is owned by user 'default' and is 'private'.
'aws-session-token' aws Service Connector Details
 PROPERTY       │ VALUE
────────────────┼──────────────────────────────────────────────────────────────────────────
 ID             │ c0f8e857-47f9-418b-a60f-c3b03023da54
 NAME           │ aws-session-token
 TYPE           │ 🔶 aws
 AUTH METHOD    │ session-token
 RESOURCE TYPES │ 🔶 aws-generic, 📦 s3-bucket, 🌀 kubernetes-cluster, 🐳 docker-registry | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 481 |
…that might be useful for your particular use case:

configure_zenml_stores: if set, ZenML will automatically update the Great Expectations configuration to include Metadata Stores that use the Artifact Store as a backend. If neither context_root_dir nor context_config are set, this is the default behavior. You can set this flag to use the ZenML Artifact Store as a backend for Great Expectations with any of the deployment methods described above. Note that ZenML will not copy the information in your existing Great Expectations stores (e.g. Expectation Suites, Validation Results) into the ZenML Artifact Store. This is something that you will have to do yourself.
configure_local_docs: set this flag to configure a local Data Docs site where Great Expectations docs are generated and can be visualized locally. Use this in case you don't already have a local Data Docs site in your existing Great Expectations configuration.
For more, up-to-date information on the Great Expectations Data Validator configuration, you can have a look at the SDK docs .
How do you use it?
The core Great Expectations concepts that you should be aware of when using it within ZenML pipelines are Expectations / Expectation Suites, Validations and Data Docs.
ZenML wraps the Great Expectations' functionality in the form of two standard steps:
a Great Expectations data profiler that can be used to automatically generate Expectation Suites from an input pandas.DataFrame dataset
a Great Expectations data validator that uses an existing Expectation Suite to validate an input pandas.DataFrame dataset
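For instance, the builtin profiler step can be configured and dropped into a pipeline roughly like this (a hedged sketch; check the SDK docs for the exact parameter names):

```python
from zenml.integrations.great_expectations.steps import (
    great_expectations_profiler_step,
)

ge_profiler_step = great_expectations_profiler_step.with_options(
    parameters={
        # name under which the generated Expectation Suite will be stored
        "expectation_suite_name": "my_suite",
        # name used to identify the input dataset
        "data_asset_name": "my_reference_df",
    }
)
```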
You can visualize Great Expectations Suites and Results in Jupyter notebooks or view them directly in the ZenML dashboard.
The Great Expectations data profiler step | stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/great-expectations | 340 |
Develop a Custom Annotator
Learning how to develop a custom annotator.
Before diving into the specifics of this component type, it is beneficial to familiarize yourself with our general guide to writing custom component flavors in ZenML. This guide provides an essential understanding of ZenML's component flavor concepts.
Annotators are a stack component that enables the use of data annotation as part of your ZenML stack and pipelines. You can use the associated CLI command to launch annotation, configure your datasets and get stats on how many labeled tasks you have ready for use.
Base abstraction in progress!
We are actively working on the base abstraction for the annotators, which will be available soon. As a result, their extension is not possible at the moment. If you would like to use an annotator in your stack, please check the list of already available annotators down below.
PreviousProdigy
NextModel Registries
Last updated 15 days ago | stack-components | https://docs.zenml.io/stack-components/annotators/custom | 190 |
…file download inside the Docker container will fail. If you want to disable or enforce the downloading of files, check out this docs page for the available options.
PreviousReuse Docker builds to speed up Docker build times
NextBuild the pipeline without running
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/customize-docker-builds/use-code-repositories-to-speed-up-docker-build-times | 54 |
zenml orchestrator connect ${ORCHESTRATOR_NAME} -i

Head on over to our docs to learn more about orchestrators and how to configure them.
Container Registry
export CONTAINER_REGISTRY_NAME=gcp_container_registry
zenml container-registry register ${CONTAINER_REGISTRY_NAME} --flavor=gcp --uri=<GCR-URI>
# Connect the GCP container registry to the target gcp project via a GCP Service Connector
zenml container-registry connect ${CONTAINER_REGISTRY_NAME} -i
Head on over to our docs to learn more about container registries and how to configure them.
7) Create Stack
export STACK_NAME=gcp_stack
zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} \
    -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set
In case you want to also add any other stack components to this stack, feel free to do so.
And you're already done!
Just like that, you now have a fully working GCP stack ready to go. Feel free to take it for a spin by running a pipeline on it.
Cleanup
If you do not want to use any of the created resources in the future, simply delete the project you created.
gcloud projects delete <PROJECT_ID_OR_NUMBER>
PreviousRun on AWS
NextKubeflow
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/popular-integrations/gcp-guide | 284 |
)
print(model.run_metadata["metadata_key"].value)

For further depth, there is an advanced metadata logging guide that goes into more detail about logging metadata in ZenML.
Using the stages of a model
A model's versions can exist in various stages. These are meant to signify their lifecycle state:
staging: This version is staged for production.
production: This version is running in a production setting.
latest: The latest version of the model.
archived: This is archived and no longer relevant. This stage occurs when a model moves out of any other stage.
from zenml import Model

# Get the latest version of a model
model = Model(
    name="iris_classifier",
    version="latest",
)

# Get `my_version` version of a model
model = Model(
    name="iris_classifier",
    version="my_version",
)

# Pass the stage into the version field
# to get the `staging` model
model = Model(
    name="iris_classifier",
    version="staging",
)

# This will set this version to production
model.set_stage(stage="production", force=True)
# List staging models
zenml model version list <MODEL_NAME> --stage staging
# Update to production
zenml model version update <MODEL_NAME> <MODEL_VERSION_NAME> -s production
The ZenML Pro dashboard has additional capabilities that include easily changing the stage:
ZenML Model and versions are some of the most powerful features in ZenML. To understand them in a deeper way, read the dedicated Model Management guide.
PreviousManage artifacts
NextA starter project
Last updated 19 days ago | user-guide | https://docs.zenml.io/v/docs/user-guide/starter-guide/track-ml-models | 322 |
(Optional) Which Metadata to Extract for the Artifact

Optionally, you can override the extract_metadata() method to track custom metadata for all artifacts saved by your materializer. Anything you extract here will be displayed in the dashboard next to your artifacts.

The returned metadata must map strings to values of the types defined in src.zenml.metadata.metadata_types, which are displayed in a dedicated way in the dashboard. See src.zenml.metadata.metadata_types.MetadataType for more details.
By default, this method will only extract the storage size of an artifact, but you can overwrite it to track anything you wish. E.g., the zenml.materializers.NumpyMaterializer overwrites this method to track the shape, dtype, and some statistical properties of each np.ndarray that it saves.
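A sketch of such an override, loosely modeled on the numpy case, might look like this (the class and the tracked keys are illustrative; load()/save() are omitted for brevity):

```python
from typing import Dict

import numpy as np

from zenml.materializers.base_materializer import BaseMaterializer
from zenml.metadata.metadata_types import DType, MetadataType


class MyArrayMaterializer(BaseMaterializer):
    """Illustrative materializer that tracks shape and dtype."""

    ASSOCIATED_TYPES = (np.ndarray,)

    # load() / save() omitted for brevity

    def extract_metadata(self, arr: np.ndarray) -> Dict[str, "MetadataType"]:
        # Everything returned here is displayed next to the artifact
        # in the dashboard.
        return {
            "shape": tuple(arr.shape),
            "dtype": DType(arr.dtype.type),
        }
```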
If you would like to disable artifact metadata extraction altogether, you can set enable_artifact_metadata at either pipeline or step level via @pipeline(enable_artifact_metadata=False) or @step(enable_artifact_metadata=False).
Skipping materialization
Skipping materialization might have unintended consequences for downstream tasks that rely on materialized artifacts. Only skip materialization if there is no other way to do what you want to do.
While materializers should in most cases be used to control how artifacts are returned and consumed from pipeline steps, you might sometimes need to have a completely unmaterialized artifact in a step, e.g., if you need to know the exact path to where your artifact is stored.
An unmaterialized artifact is a zenml.materializers.UnmaterializedArtifact. Among others, it has a property uri that points to the unique path in the artifact store where the artifact is persisted. One can use an unmaterialized artifact by specifying UnmaterializedArtifact as the type in the step:
from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact
from zenml import step
@step
def my_step(my_artifact: UnmaterializedArtifact): # rather than pd.DataFrame
pass
Example | how-to | https://docs.zenml.io/how-to/handle-data-artifacts/handle-custom-data-types | 387 |
Kubernetes Orchestrator
Orchestrating your pipelines to run on Kubernetes clusters.
Using the ZenML kubernetes integration, you can orchestrate and scale your ML pipelines on a Kubernetes cluster without writing a single line of Kubernetes code.
This Kubernetes-native orchestrator is a minimalist, lightweight alternative to other distributed orchestrators like Airflow or Kubeflow.
Overall, the Kubernetes orchestrator is quite similar to the Kubeflow orchestrator in that it runs each pipeline step in a separate Kubernetes pod. However, the orchestration of the different pods is not done by Kubeflow but by a separate master pod that orchestrates the step execution via topological sort.
Compared to Kubeflow, this means that the Kubernetes-native orchestrator is faster and much simpler to start with since you do not need to install and maintain Kubeflow on your cluster. The Kubernetes-native orchestrator is an ideal choice for teams new to distributed orchestration that do not want to go with a fully-managed offering.
However, since Kubeflow is much more mature, you should, in most cases, aim to move your pipelines to Kubeflow in the long run. A smooth way to production-grade orchestration could be to set up a Kubernetes cluster first and get started with the Kubernetes-native orchestrator. If needed, you can then install and set up Kubeflow later and simply switch out the orchestrator of your stack as soon as your full setup is ready.
This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!
When to use it
You should use the Kubernetes orchestrator if:
you're looking for a lightweight way of running your pipelines on Kubernetes.
you don't need a UI to list all your pipeline runs.
you're not willing to maintain Kubeflow Pipelines on your Kubernetes cluster.
you're not interested in paying for managed solutions like Vertex.
How to deploy it | stack-components | https://docs.zenml.io/stack-components/orchestrators/kubernetes | 397 |
Track ML models
Creating a full picture of a ML model using the Model Control Plane
As discussed in the Core Concepts, ZenML also contains the notion of a Model, which consists of many model versions (the iterations of the model). These concepts are exposed in the Model Control Plane (MCP for short).
What is a ZenML Model?
Before diving in, let's take some time to build an understanding of what we mean when we say Model in ZenML terms. A Model is simply an entity that groups pipelines, artifacts, metadata, and other crucial business data into a unified entity. In this sense, a ZenML Model is a concept that more broadly encapsulates your ML product's business logic. You may even think of a ZenML Model as a "project" or a "workspace".

Please note that one of the most common artifacts associated with a Model in ZenML is the so-called technical model, which is the actual model file or files that hold the weights and parameters of a machine learning training result. However, this is not the only artifact that is relevant; artifacts such as the training data and the predictions this model produces in production are also linked inside a ZenML Model.
Models are first-class citizens in ZenML and as such viewing and using them is unified and centralized in the ZenML API, the ZenML client as well as on the ZenML Pro dashboard.
These models can be viewed within ZenML:
zenml model list can be used to list all models.
The ZenML Pro dashboard has additional capabilities that include visualizing these models in the dashboard.
Configuring a model in a pipeline
The easiest way to use a ZenML model is to pass a Model object as part of a pipeline run. This can be done easily at a pipeline or a step level, or via a YAML config. | user-guide | https://docs.zenml.io/user-guide/starter-guide/track-ml-models | 370 |
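At the pipeline level, this looks roughly as follows (a minimal sketch; the model name and metadata are illustrative):

```python
from zenml import Model, pipeline

model = Model(
    name="iris_classifier",
    license="Apache 2.0",
    description="A classifier for the iris dataset.",
)


@pipeline(model=model)
def training_pipeline():
    # every step in this run is now associated with `iris_classifier`
    ...
```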
last_run = training_pipeline()
print(last_run.id)

# You can also use the class directly with the `model` object
last_run = training_pipeline.model.last_run
print(last_run.id)
# OR you can fetch it after execution is finished:
pipeline = Client().get_pipeline("training_pipeline")
last_run = pipeline.last_run
print(last_run.id)
# You can now fetch the model
trainer_step = last_run.steps["svc_trainer"]
model = trainer_step.outputs["trained_model"].load()
PreviousAccess secrets in a step
NextGet past pipeline/step runs
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/build-pipelines/fetching-pipelines | 123 |
re stacks and pipelines.
Deploying a ZenML Server

Deploying the ZenML Server is a crucial step towards transitioning to a production-grade environment for your machine learning projects. By setting up a deployed ZenML Server instance, you gain access to powerful features, allowing you to use stacks with remote components, centrally track progress, collaborate effectively, and achieve reproducible results.
Currently, there are two main options to access a deployed ZenML server:
SaaS: With the Cloud offering you can utilize a control plane to create ZenML servers, also known as tenants. These tenants are managed and maintained by ZenML's dedicated team, alleviating the burden of server management from your end. Importantly, your data remains securely within your stack, and ZenML's role is primarily to handle tracking of metadata and server maintenance.
Self-hosted Deployment: Alternatively, you have the ability to deploy ZenML on your own self-hosted environment. This can be achieved through various methods, including using our CLI, Docker, Helm, or HuggingFace Spaces. We also offer our Pro version for self-hosted deployments, so you can use our full paid feature-set while staying fully in control with an airgapped solution on your infrastructure.
Currently the ZenML server supports a legacy and a brand-new version of the dashboard. To use the legacy version which supports stack registration from the dashboard simply set the following environment variable in the deployment environment: export ZEN_SERVER_USE_LEGACY_DASHBOARD=True.
Both options offer distinct advantages, allowing you to choose the deployment approach that best aligns with your organization's needs and infrastructure preferences. Whichever path you select, ZenML facilitates a seamless and efficient way to take advantage of the ZenML Server and enhance your machine learning workflows for production-level success.
Choose the most appropriate deployment strategy for you out of the following options to get started with the deployment: | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml | 375 |
 4a550c82-aa64-4a48-9c7f-d5e127d77a44 │ aws-multi-type │ 🔶 aws │ 🌀 kubernetes-cluster │ zenhacks-cluster
──────────────────────────────────────┴────────────────┴────────┴───────────────────────┴──────────────────
zenml service-connector login gcp-user-multi --resource-type kubernetes-cluster --resource-id zenml-test-cluster
Example Command Output
$ zenml service-connector login gcp-user-multi --resource-type kubernetes-cluster --resource-id zenml-test-cluster
Attempting to configure local client using service connector 'gcp-user-multi'...
Updated local kubeconfig with the cluster details. The current kubectl context was set to 'gke_zenml-core_zenml-test-cluster'.
The 'gcp-user-multi' Kubernetes Service Connector connector was used to successfully configure the local Kubernetes cluster client/SDK.
# Verify that the local kubectl client is now configured to access the remote Kubernetes cluster
$ kubectl cluster-info
Kubernetes control plane is running at https://35.185.95.223
GLBCDefaultBackend is running at https://35.185.95.223/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://35.185.95.223/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://35.185.95.223/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
zenml service-connector login aws-multi-type --resource-type kubernetes-cluster --resource-id zenhacks-cluster
Example Command Output
$ zenml service-connector login aws-multi-type --resource-type kubernetes-cluster --resource-id zenhacks-cluster
Attempting to configure local client using service connector 'aws-multi-type'...
…idle clusters, preventing unnecessary cloud costs. You can configure the SkyPilot VM Orchestrator to use a specific VM type, and resources for each step of your pipeline can be configured individually. Read more about how to configure step-specific resources here.

The SkyPilot VM Orchestrator does not currently support the ability to schedule pipeline runs.
All ZenML pipeline runs are executed using Docker containers within the VMs provisioned by the orchestrator. For that reason, you may need to configure your pipeline settings with docker_run_args=["--gpus=all"] to enable GPU support in the Docker container.
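Assuming the AWS flavor, those Docker arguments might be wired in through the orchestrator settings along these lines (a hedged sketch; the settings class, import path, and settings key differ per cloud and may vary by ZenML version):

```python
from zenml import pipeline
from zenml.integrations.skypilot_aws.flavors.skypilot_orchestrator_aws_vm_flavor import (
    SkypilotAWSOrchestratorSettings,
)

skypilot_settings = SkypilotAWSOrchestratorSettings(
    # enable GPU access inside the step containers
    docker_run_args=["--gpus=all"],
)


@pipeline(settings={"orchestrator.vm_aws": skypilot_settings})
def my_gpu_pipeline():
    ...
```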
How to deploy it
You don't need to do anything special to deploy the SkyPilot VM Orchestrator. As the SkyPilot integration itself takes care of provisioning VMs, you can simply use the orchestrator as you would any other ZenML orchestrator. However, you will need to ensure that you have the appropriate permissions to provision VMs on your cloud provider of choice and to configure your SkyPilot orchestrator accordingly using the service connectors feature.
The SkyPilot VM Orchestrator currently only supports the AWS, GCP, and Azure cloud platforms.
How to use it
To use the SkyPilot VM Orchestrator, you need:
One of the SkyPilot integrations installed. You can install the SkyPilot integration for your cloud provider of choice using the following command:

# For AWS
pip install "zenml[connectors-aws]"
zenml integration install aws skypilot_aws
# for GCP
pip install "zenml[connectors-gcp]"
zenml integration install gcp skypilot_gcp # for GCP
# for Azure
pip install "zenml[connectors-azure]"
zenml integration install azure skypilot_azure # for Azure
Docker installed and running.
A remote artifact store as part of your stack.
A remote container registry as part of your stack.
A remote ZenML deployment.
The appropriate permissions to provision VMs on your cloud provider of choice.
A service connector configured to authenticate with your cloud provider of choice. | stack-components | https://docs.zenml.io/stack-components/orchestrators/skypilot-vm | 438 |
…to interact with the remote model server include:

deploy_model - Deploys a model to the serving environment and returns a Service object that represents the deployed model server.

find_model_server - Finds and returns a list of Service objects that represent model servers that have been deployed to the serving environment; the services are stored in the DB and can be used as a reference to know what and where the model is deployed.

stop_model_server - Stops a model server that is currently running in the serving environment.

start_model_server - Starts a model server that has been stopped in the serving environment.

delete_model_server - Deletes a model server from the serving environment and from the DB.
ZenML uses the Service object to represent a model server that has been deployed to a serving environment. The Service object is saved in the DB and can be used as a reference to know what and where the model is deployed. The Service object consists of 2 main attributes, the config and the status. The config attribute holds all the deployment configuration attributes required to create a new deployment, while the status attribute holds the operational status of the deployment, such as the last error message, the prediction URL, and the deployment status.
from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer
model_deployer = HuggingFaceModelDeployer.get_active_model_deployer()
services = model_deployer.find_model_server(
    pipeline_name="LLM_pipeline",
    pipeline_step_name="huggingface_model_deployer_step",
    model_name="LLAMA-7B",
)

if services:
    if services[0].is_running:
        print(
            f"Model server {services[0].config['model_name']} is running at {services[0].status['prediction_url']}"
        )
    else:
        print(f"Model server {services[0].config['model_name']} is not running")
        model_deployer.start_model_server(services[0])
else:
    print("No model server found")
    service = model_deployer.deploy_model(
        pipeline_name="LLM_pipeline",
        pipeline_step_name="huggingface_model_deployer_step", | stack-components | https://docs.zenml.io/stack-components/model-deployers | 419 |
…component to a remote resource via a Service Connector

The following operations are only possible with Service Connector Types that are locally available (with some notable exceptions covered in the information box that follows):
Service Connector auto-configuration and discovery of credentials stored by a local client, CLI, or SDK (e.g. aws or kubectl).
Using the configuration and credentials managed by a Service Connector to configure a local client, CLI, or SDK (e.g. docker or kubectl).
Running pipelines with a Stack Component that is connected to a remote resource through a Service Connector
One interesting and useful byproduct of the way cloud provider Service Connectors are designed is the fact that you don't need to have the cloud provider Service Connector Type available client-side to be able to access some of its resources. Take the following situation for example:
the GCP Service Connector Type can provide access to GKE Kubernetes clusters and GCR Docker container registries.
however, you don't need the GCP Service Connector Type or any GCP libraries to be installed on the ZenML clients to connect to and use those Kubernetes clusters or Docker registries in your ML pipelines.
the Kubernetes Service Connector Type is enough to access any Kubernetes cluster, regardless of its provenance (AWS, GCP, etc.)
the Docker Service Connector Type is enough to access any Docker container registry, regardless of its provenance (AWS, GCP, etc.)
Register Service Connectors
When you reach this section, you probably already made up your mind about the type of infrastructure or cloud provider that you want to use to run your ZenML pipelines after reading through the Service Connector Types section, and you probably carefully weighed your choices of authentication methods and best security practices. Either that or you simply want to quickly try out a Service Connector to connect one of the ZenML Stack components to an external resource. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 368 |
…for the local cloud provider CLI (AWS in this case):

AWS_PROFILE=connectors zenml service-connector register aws-sts-token --type aws --auto-configure --auth-method sts-token
Example Command Output
Registering service connector 'aws-sts-token'...
Successfully registered service connector `aws-sts-token` with access to the following resources:
 RESOURCE TYPE         │ RESOURCE NAMES
───────────────────────┼───────────────────────────────────────────────
 🔶 aws-generic        │ us-east-1
 📦 s3-bucket          │ s3://zenfiles
                       │ s3://zenml-demos
                       │ s3://zenml-generative-chat
                       │ s3://zenml-public-datasets
 🌀 kubernetes-cluster │ zenhacks-cluster
 🐳 docker-registry    │ 715803424590.dkr.ecr.us-east-1.amazonaws.com
───────────────────────┴───────────────────────────────────────────────
The Service Connector is now configured with a short-lived token that will expire after some time. You can verify this by inspecting the Service Connector:
zenml service-connector describe aws-sts-token
Example Command Output
Service connector 'aws-sts-token' of type 'aws' with id '63e14350-6719-4255-b3f5-0539c8f7c303' is owned by user 'default' and is 'private'.
'aws-sts-token' aws Service Connector Details
 PROPERTY │ VALUE | how-to | https://docs.zenml.io/how-to/auth-management/best-security-practices | 547 |
…📦 blob-container │ az://demo-zenmlartifactstore
After having set up or decided on an Azure Service Connector to use to connect to the target Azure Blob storage container, you can register the Azure Artifact Store as follows:
# Register the Azure artifact-store and reference the target blob storage container
zenml artifact-store register <AZURE_STORE_NAME> -f azure \
--path='az://your-container'
# Connect the Azure artifact-store to the target container via an Azure Service Connector
zenml artifact-store connect <AZURE_STORE_NAME> -i
A non-interactive version that connects the Azure Artifact Store to a target blob storage container through an Azure Service Connector:
zenml artifact-store connect <AZURE_STORE_NAME> --connector <CONNECTOR_ID>
Example Command Output
$ zenml artifact-store connect azure-blob-demo --connector azure-blob-demo
Successfully connected artifact store `azure-blob-demo` to the following resources:
 CONNECTOR ID                         │ CONNECTOR NAME  │ CONNECTOR TYPE │ RESOURCE TYPE     │ RESOURCE NAMES
──────────────────────────────────────┼─────────────────┼────────────────┼───────────────────┼──────────────────────────────
 f6b329e1-00f7-4392-94c9-264119e672d0 │ azure-blob-demo │ 🇦 azure       │ 📦 blob-container │ az://demo-zenmlartifactstore
As a final step, you can use the Azure Artifact Store in a ZenML Stack:
# Register and set a stack with the new artifact store
zenml stack register <STACK_NAME> -a <AZURE_STORE_NAME> ... --set | stack-components | https://docs.zenml.io/stack-components/artifact-stores/azure | 584 |
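To sanity-check the setup, here is a minimal sketch of a pipeline (assuming the stack registered above is set as active) whose step output will be materialized to the configured blob container:

from zenml import pipeline, step

@step
def produce_message() -> str:
    """Return a string artifact that will be stored in the Azure Blob Storage container."""
    return "stored in Azure Blob Storage"

@pipeline
def azure_smoke_test():
    produce_message()

if __name__ == "__main__":
    azure_smoke_test()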
)
print(model.run_metadata["metadata_key"].value)
Cloud provider Service Connector Types
Cloud service providers like AWS, GCP and Azure implement one or more authentication schemes that are unified across a wide range of resources and services, all managed under the same umbrella. This allows users to access many different resources with a single set of authentication credentials. Some authentication methods are straightforward to set up, but are only meant to be used for development and testing. Other authentication schemes are powered by extensive roles and permissions management systems and are targeted at production environments where security and operations at scale are big concerns. The corresponding cloud provider Service Connector Types are designed accordingly:
they support multiple types of resources (e.g. Kubernetes clusters, Docker registries, a form of object storage)
they usually include some form of "generic" Resource Type that can be used by clients to access types of resources that are not yet part of the supported set. When this generic Resource Type is used, clients and Stack Components that access the connector are provided some form of generic session, credentials or client that can be used to access any of the cloud provider resources. For example, in the AWS case, clients accessing the aws-generic Resource Type are issued a pre-authenticated boto3 Session object that can be used to access any AWS service. | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 366 |
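For illustration, here is a minimal Python sketch of how a client could obtain such a generic session (the connector name aws-multi-type is hypothetical, and the exact client API may vary between ZenML versions):

from zenml.client import Client

# Fetch a connector client scoped to the generic AWS resource type
connector = Client().get_service_connector_client(
    name_id_or_prefix="aws-multi-type",
    resource_type="aws-generic",
)

# connect() returns a pre-authenticated boto3.Session that can access any AWS service
session = connector.connect()
print(session.client("s3").list_buckets())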
={
        "count": len(docs_urls),
    },
)
return docs_urls

The get_all_pages function simply crawls our documentation website and retrieves a unique set of URLs. We've limited it to only scrape the documentation relating to the most recent releases so that we're not mixing old syntax and information with the new. This is a simple way to ensure that we're only ingesting the most relevant and up-to-date information into our pipeline.
We also log the count of those URLs as metadata for the step output. This will be visible in the dashboard for extra visibility around the data that's being ingested. Of course, you can also add more complex logic to this step, such as filtering out certain URLs or adding more metadata.
Once we have our list of URLs, we use the unstructured library to load and parse the pages. This will allow us to use the text without having to worry about the details of the HTML structure and/or markup. This specifically helps us keep the text content as small as possible since we are operating in a constrained environment with LLMs.
from typing import List
from unstructured.partition.html import partition_html
from zenml import step
@step
def web_url_loader(urls: List[str]) -> List[str]:
"""Loads documents from a list of URLs."""
document_texts = []
for url in urls:
elements = partition_html(url=url)
text = "\n\n".join([str(el) for el in elements])
document_texts.append(text)
return document_texts
The previously-mentioned frameworks offer many more options when it comes to data ingestion, including the ability to load documents from a variety of sources, preprocess the text, and extract relevant features. For our purposes, though, we don't need anything too fancy. It also makes our pipeline easier to debug since we can see exactly what's being loaded and how it's being processed. You don't get that same level of visibility with more complex frameworks.
Preprocessing the data | user-guide | https://docs.zenml.io/v/docs/user-guide/llmops-guide/rag-with-zenml/data-ingestion | 393 |
ββββββββββββββββββββββββββββββββββββββββββββββββββ¨β VERSION β 1 β
β βββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β VERSION_DESCRIPTION β Run #1 of the mlflow_training_pipeline. β
β βββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β CREATED_AT β 2023-03-01 09:09:06.899000 β
β βββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β UPDATED_AT β 2023-03-01 09:09:06.899000 β | stack-components | https://docs.zenml.io/stack-components/model-registries/mlflow | 218 |
local --set
```
Example Command Output
```textConnected to the ZenML server: 'https://stefan.develaws.zenml.io'
Running with active workspace: 'default' (repository)
Stack 'aws-demo' successfully registered!
Active repository stack set to:'aws-demo'
```
Finally, run a simple pipeline to prove that everything works as expected. We'll use the simplest pipeline possible for this example:

from zenml import pipeline, step
@step
def step_1() -> str:
"""Returns the `world` string."""
return "world"
@step(enable_cache=False)
def step_2(input_one: str, input_two: str) -> None:
"""Combines the two strings at its input and prints them."""
combined_str = f"{input_one} {input_two}"
print(combined_str)
@pipeline
def my_pipeline():
output_step_one = step_1()
step_2(input_one="hello", input_two=output_step_one)
if __name__ == "__main__":
my_pipeline()

Saving that to a run.py file and running it gives us:
Example Command Output
```text
$ python run.py
Reusing registered pipeline simple_pipeline (version: 1).
Building Docker image(s) for pipeline simple_pipeline.
Building Docker image 715803424590.dkr.ecr.us-east-1.amazonaws.com/zenml:simple_pipeline-orchestrator.
Including user-defined requirements: boto3==1.26.76
Including integration requirements: boto3, kubernetes==18.20.0, s3fs>2022.3.0,<=2023.4.0, sagemaker==2.117.0
No .dockerignore found, including all files inside build context.
Step 1/10 : FROM zenmldocker/zenml:0.39.1-py3.8
Step 2/10 : WORKDIR /app
Step 3/10 : COPY .zenml_user_requirements .
Step 4/10 : RUN pip install --default-timeout=60 --no-cache-dir -r .zenml_user_requirements
Step 5/10 : COPY .zenml_integration_requirements .
Step 6/10 : RUN pip install --default-timeout=60 --no-cache-dir -r .zenml_integration_requirements
Step 7/10 : ENV ZENML_ENABLE_REPO_INIT_WARNINGS=False
Step 8/10 : ENV ZENML_CONFIG_PATH=/app/.zenconfig
Step 9/10 : COPY . .
Step 10/10 : RUN chmod -R a+rw . | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 545 |
πCommunity & content
All possible ways for our community to get in touch with ZenML.
The ZenML team and community have put together a list of references that can be used to get in touch with the development team of ZenML and develop a deeper understanding of the framework.
Slack Channel: Get help from the community
The ZenML Slack channel is the main gathering point for the community. Not only is it the best place to get in touch with the core team of ZenML, but it is also a great way to discuss new ideas and share your ZenML projects with the community. If you have a question, there is a high chance someone else might have already answered it on Slack!
Social Media: Bite-sized updates
We are active on LinkedIn and Twitter where we post bite-sized updates on releases, events, and MLOps in general. Follow us to interact and stay up to date! We would appreciate it if you could comment on and share our posts so more people can benefit from our work at ZenML!
YouTube Channel: Video tutorials, workshops, and more
Our YouTube channel features a growing set of videos that take you through the entire framework. Go here if you are a visual learner, and follow along with some tutorials.
Public roadmap
The feedback from our community plays a significant role in the development of ZenML. That's why we have a public roadmap that serves as a bridge between our users and our development team. If you have ideas regarding any new features or want to prioritize one over the other, feel free to share your thoughts here or vote on existing ideas.
Blog
On our Blog page, you can find various articles written by our team. We use it as a platform to share our thoughts and explain the implementation process of our tool, its new features, and the thought process behind them.
Podcast | reference | https://docs.zenml.io/reference/community-and-content | 372 |
System Architectures
Different variations of the ZenML Cloud architecture depending on your needs.
If you're interested in assessing ZenML Cloud, you can create a free account, which defaults to a Scenario 1 deployment. To upgrade to different scenarios, please reach out to us.
ZenML Pro offers many additional features to increase your team's productivity. No matter your specific needs, the hosting options for ZenML Pro range from easy SaaS integration to completely air-gapped deployments on your own infrastructure.
A ZenML Pro deployment consists of the following moving pieces, for both the SaaS product and the self-hosted version:
ZenML Cloud API: This is a centralized MLOps control plane that includes a managed ZenML dashboard and a special ZenML server optimized for production MLOps workloads.
Single Sign-On (SSO): The ZenML Cloud API is integrated with Auth0 as an SSO provider to manage user authentication and authorization. Users can log in to the ZenML Cloud dashboard using their social media accounts or their corporate credentials.
Secrets Store: All secrets and credentials required to access customer infrastructure services are stored in a secure secrets store. The ZenML Cloud API has access to these secrets and uses them to access customer infrastructure services on behalf of the ZenML Cloud. The secrets store can be hosted either by the ZenML Cloud or by the customer.
ML Metadata Store: This is where all ZenML metadata is stored, including ML metadata such as tracking and versioning information about pipelines and models.
These four components interact with other MLOps stack components, secrets, and data in the varying scenarios described below.
Scenario 1: Full SaaS
In this scenario, all services are hosted on the ZenML Cloud infrastructure. Customer secrets and credentials required to access customer infrastructure are stored and managed by the ZenML Cloud. | getting-started | https://docs.zenml.io/getting-started/zenml-pro/system-architectures | 369 |
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import VertexOrchestratorSettings
from kubernetes.client.models import V1Toleration

vertex_settings = VertexOrchestratorSettings(
    pod_settings={
        "affinity": {
            "nodeAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": {
                    "nodeSelectorTerms": [
                        {
                            "matchExpressions": [
                                {
                                    "key": "node.kubernetes.io/name",
                                    "operator": "In",
                                    "values": ["my_powerful_node_group"],
                                }
                            ]
                        }
                    ]
                }
            }
        },
        "tolerations": [
            V1Toleration(
                key="node.kubernetes.io/name",
                operator="Equal",
                value="",
                effect="NoSchedule"
            )
        ],
    }
)
If your pipeline steps have certain hardware requirements, you can specify them as ResourceSettings:

from zenml.config import ResourceSettings

resource_settings = ResourceSettings(cpu_count=8, memory="16GB")
These settings can then be specified on either pipeline-level or step-level:
# Either specify on pipeline-level
@pipeline(
    settings={
        "orchestrator.vertex": vertex_settings,
        "resources": resource_settings,
    }
)
def my_pipeline():
    ...

# OR specify settings on step-level
@step(
    settings={
        "orchestrator.vertex": vertex_settings,
        "resources": resource_settings,
    }
)
def my_step():
    ...
Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings.
For more information and a full list of configurable attributes of the Vertex orchestrator, check out the SDK Docs .
Enabling CUDA for GPU-backed hardware
Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
    values={
        "username": "admin",
        "password": "abc123",
    },
)

Other Client methods used for secrets management include get_secret to fetch a secret by name or id, update_secret to update an existing secret, list_secrets to query the secrets store using a variety of filtering and sorting criteria, and delete_secret to delete a secret. The full Client API reference is available here.
Set scope for secrets
ZenML secrets can be scoped to a workspace or a user. This allows you to create secrets that are only accessible within a specific workspace or to one user.
By default, all created secrets are scoped to the active workspace. To create a secret and scope it to your active user instead, you can pass the --scope argument to the CLI command:
zenml secret create <SECRET_NAME> \
--scope user \
--<KEY_1>=<VALUE_1> \
--<KEY_2>=<VALUE_2>
Scopes also act as individual namespaces. When you are referencing a secret by name in your pipelines and stacks, ZenML will first look for a secret with that name scoped to the active user, and if it doesn't find one, it will look for one in the active workspace.
Accessing registered secrets
Reference secrets in stack component attributes and settings
Some of the components in your stack require you to configure them with sensitive information like passwords or tokens, so they can connect to the underlying infrastructure. Secret references allow you to configure these components in a secure way by not specifying the value directly but instead referencing a secret by providing the secret name and key. Referencing a secret for the value of any string attribute of your stack components, simply specify the attribute using the following syntax: {{<SECRET_NAME>.<SECRET_KEY>}}
For example:
# Register a secret called `mlflow_secret` with key-value pairs for the
# username and password to authenticate with the MLflow tracking server
# Using central secrets management
zenml secret create mlflow_secret \
--username=admin \
--password=abc123 | how-to | https://docs.zenml.io/v/docs/how-to/interact-with-secrets | 411 |
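The registered secret can then be referenced when configuring the component itself. For example, for a hypothetical MLflow experiment tracker (adjust the flavor-specific options to your component):

# Reference the username and password in the experiment tracker configuration
zenml experiment-tracker register mlflow \
    --flavor=mlflow \
    --tracking_uri=<TRACKING_URI> \
    --tracking_username={{mlflow_secret.username}} \
    --tracking_password={{mlflow_secret.password}}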
issions that are granted to the connector clients.

To find out more about Application Default Credentials, see the GCP ADC documentation.
A GCP project is required and the connector may only be used to access GCP resources in the specified project. When used remotely in a GCP workload, the configured project has to be the same as the project of the attached service account.
The following assumes the local GCP CLI has already been configured with user account credentials by running the gcloud auth application-default login command:
zenml service-connector register gcp-implicit --type gcp --auth-method implicit --auto-configure
Example Command Output
Successfully registered service connector `gcp-implicit` with access to the following resources:
βββββββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π΅ gcp-generic β zenml-core β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π¦ gcs-bucket β gs://zenml-bucket-sl β
β β gs://zenml-core.appspot.com β
β β gs://zenml-core_cloudbuild β
β β gs://zenml-datasets β
β β gs://zenml-internal-artifact-store β
β β gs://zenml-kubeflow-artifact-store β
β β gs://zenml-project-time-series-bucket β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π kubernetes-cluster β zenml-test-cluster β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π³ docker-registry β gcr.io/zenml-core β | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 453 |
βββββ―ββββββββββ―βββββββββββββ―βββββββββββββββββββββββ
ACTIVE β NAME β ID β TYPE β RESOURCE TYPES β RESOURCE NAME β SHARED β OWNER β EXPIRES IN β LABELS β
β βββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββΌββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββΌβββββββββΌββββββββββΌβββββββββββββΌββββββββββββββββββββββ¨
β β aws-iam-multi-eu β e33c9fac-5daa-48b2-87bb-0187 β πΆ aws β πΆ aws-generic β <multiple> β β β default β β region:eu-central-1 β
β β β d3782cde β β π¦ s3-bucket β β β β β β
β β β β β π kubernetes-cluster β β β β β β
β β β β β π³ docker-registry β β β β β β
β βββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββΌββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββΌβββββββββΌββββββββββΌβββββββββββββΌββββββββββββββββββββββ¨
β β aws-iam-multi-us β ed528d5a-d6cb-4fc4-bc52-c3d2 β πΆ aws β πΆ aws-generic β <multiple> β β β default β β region:us-east-1 β
β β β d01643e5 β β π¦ s3-bucket β β β β β β
β β β β β π kubernetes-cluster β β β β β β | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 471 |
$ zenml service-connector login gcp-zenml-core --resource-type docker-registry
β Attempting to configure local client using service connector 'gcp-zenml-core'...
WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
The 'gcp-zenml-core' Docker Service Connector connector was used to successfully configure the local Docker/OCI container registry client/SDK.
Example Command Output
$ zenml container-registry connect gcp-zenml-core --connector gcp-zenml-core
Successfully connected container registry `gcp-zenml-core` to the following resources:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββββββ―ββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββββββΌββββββββββββββββββββ¨
β 561b776a-af8b-491c-a4ed-14349b440f30 β gcp-zenml-core β π΅ gcp β π³ docker-registry β gcr.io/zenml-core β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββββββ·ββββββββββββββββββββ
As a final step, you can use the GCP Container Registry in a ZenML Stack:
# Register and set a stack with the new container registry
zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set
For more information and a full list of configurable attributes of the GCP container registry, check out the SDK Docs.
s within a unique directory in the artifact store:

Materializers are designed to be extensible and customizable, allowing you to define your own serialization and deserialization logic for specific data types or storage systems. By default, ZenML provides built-in materializers for common data types and uses cloudpickle to pickle objects where there is no default materializer. If you want direct control over how objects are serialized, you can easily create custom materializers by extending the BaseMaterializer class and implementing the required methods for your specific use case. Read more about materializers here.
ZenML provides a built-in CloudpickleMaterializer that can handle any object by saving it with cloudpickle. However, this is not production-ready because the resulting artifacts cannot be loaded when running with a different Python version. In such cases, you should consider building a custom Materializer to save your objects in a more robust and efficient format.
Moreover, using the CloudpickleMaterializer could allow users to upload any kind of object. This could be exploited to upload a malicious file, which could execute arbitrary code on the vulnerable system.
When a pipeline runs, ZenML uses the appropriate materializers to save and load artifacts using the ZenML fileio system (built to work across multiple artifact stores). This not only simplifies the process of working with different data formats and storage systems but also enables artifact caching and lineage tracking. You can see an example of a default materializer (the numpy materializer) in action here.
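To make this concrete, here is a minimal sketch of a custom materializer for a hypothetical MyObj class (see the materializer docs linked above for the full contract):

import os
from typing import Type

from zenml.enums import ArtifactType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer

class MyObj:
    def __init__(self, name: str):
        self.name = name

class MyMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (MyObj,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[MyObj]) -> MyObj:
        """Read the object back from the artifact store."""
        with fileio.open(os.path.join(self.uri, "data.txt"), "r") as f:
            return MyObj(name=f.read())

    def save(self, my_obj: MyObj) -> None:
        """Write the object to a file inside the artifact URI."""
        with fileio.open(os.path.join(self.uri, "data.txt"), "w") as f:
            f.write(my_obj.name)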
<REGION>-docker.pkg.dev/<PROJECT_ID>/<REPOSITORY_NAME>
# Examples:
europe-west1-docker.pkg.dev/zenml/my-repo
southamerica-east1-docker.pkg.dev/zenml/zenml-test
asia-docker.pkg.dev/my-project/another-repo
To figure out the URI for your registry:
Go here and select the repository that you want to use to store Docker images. If you don't have a repository yet, take a look at the deployment section.
On the top, click the copy button to copy the full repository URL.
Infrastructure Deployment
A GCP Container Registry can be deployed directly from the ZenML CLI:
zenml container-registry deploy gcp_container_registry --flavor=gcp --provider=gcp ...
You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
How to use it
To use the GCP container registry, we need:
Docker installed and running.
The registry URI. Check out the previous section on the URI format and how to get the URI for your registry.
We can then register the container registry and use it in our active stack:
zenml container-registry register <NAME> \
--flavor=gcp \
--uri=<REGISTRY_URI>
# Add the container registry to the active stack
zenml stack update -c <NAME>
You also need to set up authentication required to log in to the container registry.
Authentication Methods
Integrating and using a GCP Container Registry in your pipelines is not possible without employing some form of authentication. If you're looking for a quick way to get started locally, you can use the Local Authentication method. However, the recommended way to authenticate to the GCP cloud platform is through a GCP Service Connector. This is particularly useful if you are configuring ZenML stacks that combine the GCP Container Registry with other remote stack components also running in GCP. | stack-components | https://docs.zenml.io/stack-components/container-registries/gcp | 418 |
Successfully registered orchestrator `<ORCHESTRATOR_NAME>`.

$ zenml service-connector list-resources --resource-type kubernetes-cluster -e
The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace:
ββββββββββββββββββββββββββββββββββββββββ―ββββββββββββββββββββββββ―βββββββββββββββββ―ββββββββββββββββββββββββ―ββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββ¨
β e33c9fac-5daa-48b2-87bb-0187d3782cde β aws-iam-multi-eu β πΆ aws β π kubernetes-cluster β kubeflowmultitenant β
β β β β β zenbox β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββ¨
β ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 β aws-iam-multi-us β πΆ aws β π kubernetes-cluster β zenhacks-cluster β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββ¨
β 1c54b32a-4889-4417-abbd-42d3ace3d03a β gcp-sa-multi β π΅ gcp β π kubernetes-cluster β zenml-test-cluster β
ββββββββββββββββββββββββββββββββββββββββ·ββββββββββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββββββββββ·ββββββββββββββββββββββ | stack-components | https://docs.zenml.io/stack-components/orchestrators/kubeflow | 508 |
Azure Blob Storage
Storing artifacts using Azure Blob Storage
The Azure Artifact Store is an Artifact Store flavor provided with the Azure ZenML integration that uses the Azure Blob Storage managed object storage service to store ZenML artifacts in an Azure Blob Storage container.
When would you want to use it?
Running ZenML pipelines with the local Artifact Store is usually sufficient if you just want to evaluate ZenML or get started quickly without incurring the trouble and the cost of employing cloud storage services in your stack. However, the local Artifact Store becomes insufficient or unsuitable if you have more elaborate needs for your project:
if you want to share your pipeline run results with other team members or stakeholders inside or outside your organization
if you have other components in your stack that are running remotely (e.g. a Kubeflow or Kubernetes Orchestrator running in a public cloud).
if you outgrow what your local machine can offer in terms of storage space and need to use some form of private or public storage service that is shared with others
if you are running pipelines at scale and need an Artifact Store that can handle the demands of production-grade MLOps
In all these cases, you need an Artifact Store that is backed by a form of public cloud or self-hosted shared object storage service.
You should use the Azure Artifact Store when you decide to keep your ZenML artifacts in a shared object storage and if you have access to the Azure Blob Storage managed service. You should consider one of the other Artifact Store flavors if you don't have access to the Azure Blob Storage service.
How do you deploy it?
The Azure Artifact Store flavor is provided by the Azure ZenML integration, you need to install it on your local machine to be able to register an Azure Artifact Store and add it to your stack:
zenml integration install azure -y | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/azure | 366 |
Deleting a pipeline
Learn how to delete pipelines.
Delete the latest version of a pipeline
zenml pipeline delete <PIPELINE_NAME>
from zenml.client import Client
Client().delete_pipeline(<PIPELINE_NAME>)
Delete a specific version of a pipeline
zenml pipeline delete <PIPELINE_NAME> --version=<VERSION_NAME>
from zenml.client import Client
Client().delete_pipeline(<PIPELINE_NAME>, version=<VERSION_NAME>)
Delete all versions of a pipeline
zenml pipeline delete <PIPELINE_NAME> --all-versions
from zenml.client import Client
Client().delete_pipeline(<PIPELINE_NAME>, all_versions=True)
Delete a pipeline run
Deleting a pipeline does not automatically delete any of its associated runs or artifacts. To delete a pipeline run, you can use the following command:
zenml pipeline runs delete <RUN_NAME_OR_ID>
from zenml.client import Client
Client().delete_pipeline_run(<RUN_NAME_OR_ID>)
your terminal.
Understanding steps and artifacts

When you ran the pipeline, each individual function that ran is shown in the DAG visualization as a step and is marked with the function name. Steps are connected with artifacts, which are simply the objects that are returned by these functions and input into downstream functions. This simple logic lets us break down our entire machine learning code into a sequence of tasks that pass data between each other.
The artifacts produced by your steps are automatically stored and versioned by ZenML. The code that produced these artifacts is also automatically tracked. The parameters and all other configuration is also automatically captured.
So you can see, by simply structuring your code within some functions and adding some decorators, we are one step closer to having a more tracked and reproducible codebase!
Expanding to a Full Machine Learning Workflow
With the fundamentals in hand, letβs escalate our simple pipeline to a complete ML workflow. For this task, we will use the well-known Iris dataset to train a Support Vector Classifier (SVC).
Let's start with the imports.
from typing_extensions import Annotated  # or `from typing import Annotated` on Python 3.9+
from typing import Tuple
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.base import ClassifierMixin
from sklearn.svm import SVC
from zenml import pipeline, step
Make sure to install the requirements as well:
pip install matplotlib
zenml integration install sklearn -y
In this case, ZenML has an integration with sklearn so you can use the ZenML CLI to install the right version directly. | user-guide | https://docs.zenml.io/v/docs/user-guide/starter-guide/create-an-ml-pipeline | 329 |
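With these imports in place, the training logic can be wrapped in a step. As a minimal sketch of where this is headed (the hyperparameter value is illustrative):

@step
def train_model(
    X_train: pd.DataFrame,
    y_train: pd.Series,
) -> Annotated[ClassifierMixin, "trained_model"]:
    """Train a Support Vector Classifier on the training data."""
    model = SVC(gamma=0.001)
    model.fit(X_train.to_numpy(), y_train.to_numpy())
    return model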
Get arbitrary artifacts in a step
Not all artifacts need to come through the step interface from direct upstream steps.
As described in the metadata guide, artifacts and their metadata can be fetched with the ZenML Client, and the same mechanism works from within a step. This allows you to fetch artifacts produced by other upstream steps or even by completely different pipelines.
from zenml.client import Client
from zenml import step
@step
def my_step():
client = Client()
# Directly fetch an artifact
output = client.get_artifact_version("my_dataset", "my_version")
output.run_metadata["accuracy"].value
This is one of the ways you can access artifacts that have already been created and stored in the artifact store. This can be useful when you want to use artifacts from other pipelines or steps that are not directly upstream.
See Also
Managing artifacts - learn about the ExternalArtifact type and how to pass artifacts between steps.
kip scoping its Resource Type during registration.

A multi-instance Service Connector instance can be configured once and used to gain access to multiple resources of the same type, each identifiable by a Resource Name. Not all types of connectors and not all types of resources support multiple instances. Some Service Connector Types, like the generic Kubernetes and Docker connector types, only allow single-instance configurations: a Service Connector instance can only be used to access a single Kubernetes cluster and a single Docker registry. To configure a multi-instance Service Connector, you can simply skip scoping its Resource Name during registration.
The following is an example of configuring a multi-type AWS Service Connector instance capable of accessing multiple AWS resources of different types:
zenml service-connector register aws-multi-type --type aws --auto-configure
Example Command Output
β Registering service connector 'aws-multi-type'...
Successfully registered service connector `aws-multi-type` with access to the following resources:
βββββββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β πΆ aws-generic β us-east-1 β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π¦ s3-bucket β s3://aws-ia-mwaa-715803424590 β
β β s3://zenfiles β
β β s3://zenml-demos β
β β s3://zenml-generative-chat β
β β s3://zenml-public-datasets β
β β s3://zenml-public-swagger-spec β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π kubernetes-cluster β zenhacks-cluster β | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 437 |
AI.
4) Create a JSON Key for your service account

This JSON key file will allow ZenML to assume the identity of the service account. You will need the filepath of the downloaded file in the next step.
export JSON_KEY_FILE_PATH=<JSON_KEY_FILE_PATH>
5) Create a Service Connector within ZenML
The service connector will allow ZenML and other ZenML components to authenticate themselves with GCP.
zenml integration install gcp \
&& zenml service-connector register gcp_connector \
--type gcp \
--auth-method service-account \
--service_account_json=@${JSON_KEY_FILE_PATH} \
--project_id=<GCP_PROJECT_ID>
6) Create Stack Components
Artifact Store
Before you run anything within the ZenML CLI, head on over to GCP and create a GCS bucket, in case you don't already have one that you can use. Once this is done, you can create the ZenML stack component as follows:
export ARTIFACT_STORE_NAME=gcp_artifact_store
# Register the GCS artifact-store and reference the target GCS bucket
zenml artifact-store register ${ARTIFACT_STORE_NAME} --flavor gcp \
--path=gs://<YOUR_BUCKET_NAME>
# Connect the GCS artifact-store to the target bucket via a GCP Service Connector
zenml artifact-store connect ${ARTIFACT_STORE_NAME} -i
Head on over to our docs to learn more about artifact stores and how to configure them.
Orchestrator
This guide will use Vertex AI as the orchestrator to run the pipelines. As a serverless service Vertex is a great choice for quick prototyping of your MLOps stack. The orchestrator can be switched out at any point in the future for a more use-case- and budget-appropriate solution.
export ORCHESTRATOR_NAME=gcp_vertex_orchestrator
# Register the Vertex orchestrator and set the target GCP project and region
zenml orchestrator register ${ORCHESTRATOR_NAME} --flavor=vertex \
    --project=<PROJECT_NAME> --location=europe-west2
# Connect the Vertex orchestrator to the target GCP project via a GCP Service Connector
zenml orchestrator connect ${ORCHESTRATOR_NAME} -i | how-to | https://docs.zenml.io/how-to/popular-integrations/gcp-guide | 460 |
ervice Connector credentials are actually working.

When configuring local CLI utilities with credentials extracted from Service Connectors, keep in mind that most Service Connectors, particularly those used with cloud platforms, usually exercise the security best practice of issuing temporary credentials such as API tokens. The implication is that your local CLI may only be allowed access to the remote service for a short time before those credentials expire, after which you need to fetch another set of credentials from the Service Connector.
The following examples show how the local Kubernetes kubectl CLI can be configured with credentials issued by a Service Connector and then used to access a Kubernetes cluster directly:
zenml service-connector list-resources --resource-type kubernetes-cluster
Example Command Output
The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββββββββ―βββββββββββββββββ―ββββββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β 9d953320-3560-4a78-817c-926a3898064d β gcp-user-multi β π΅ gcp β π kubernetes-cluster β zenml-test-cluster β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 424 |
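Once a connector and cluster have been picked from a listing like the one above, the local kubectl CLI can be configured with a single command, using the example connector and cluster names shown here:

zenml service-connector login gcp-user-multi --resource-type kubernetes-cluster --resource-id zenml-test-cluster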
from zenml.client import Client

@pipeline
def do_predictions():
    # model name and version are directly passed into client method
    model = Client().get_model_version("iris_classifier", ModelStages.PRODUCTION)
    inference_data = load_data()
    predict(
        # Here, we load in the `trained_model` from a trainer step
        model=model.get_model_artifact("trained_model"),
        data=inference_data,
    )

In this case the evaluation of the actual artifact will happen only when the step is actually running.
βββ·βββββββββββββββββββββββββββββββββββββββββββββββ

The Service Connector configuration shows that the connector is configured with an STS token:
zenml service-connector describe aws-sts-token
Example Command Output
Service connector 'aws-sts-token' of type 'aws' with id '63e14350-6719-4255-b3f5-0539c8f7c303' is owned by user 'default' and is 'private'.
'aws-sts-token' aws Service Connector Details
ββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β PROPERTY β VALUE β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β ID β a05ef4ef-92cb-46b2-8a3a-a48535adccaf β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β NAME β aws-sts-token β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β TYPE β πΆ aws β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β AUTH METHOD β sts-token β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE TYPES β πΆ aws-generic, π¦ s3-bucket, π kubernetes-cluster, π³ docker-registry β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE NAME β <multiple> β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SECRET ID β bffd79c7-6d76-483b-9001-e9dda4e865ae β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 492 |
gs://zenml-bucket-sl
βββββββββββββββββββββββββββββββββββββββββ·ββββββββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββββββββββββ
After having set up or decided on a GCP Service Connector to use to connect to the target GCS bucket, you can register the GCS Artifact Store as follows:
# Register the GCS artifact-store and reference the target GCS bucket
zenml artifact-store register <GCS_STORE_NAME> -f gcp \
--path='gs://your-bucket'
# Connect the GCS artifact-store to the target bucket via a GCP Service Connector
zenml artifact-store connect <GCS_STORE_NAME> -i
A non-interactive version that connects the GCS Artifact Store to a target GCP Service Connector:
zenml artifact-store connect <GCS_STORE_NAME> --connector <CONNECTOR_ID>
Example Command Output
$ zenml artifact-store connect gcs-zenml-bucket-sl --connector gcs-zenml-bucket-sl
Successfully connected artifact store `gcs-zenml-bucket-sl` to the following resources:
ββββββββββββββββββββββββββββββββββββββββ―ββββββββββββββββββββββ―βββββββββββββββββ―ββββββββββββββββ―βββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββΌβββββββββββββββββββββββ¨
β 2a0bec1b-9787-4bd7-8d4a-9a47b6f61643 β gcs-zenml-bucket-sl β π΅ gcp β π¦ gcs-bucket β gs://zenml-bucket-sl β
ββββββββββββββββββββββββββββββββββββββββ·ββββββββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββ·βββββββββββββββββββββββ
As a final step, you can use the GCS Artifact Store in a ZenML Stack:
# Register and set a stack with the new artifact store
zenml stack register <STACK_NAME> -a <GCS_STORE_NAME> ... --set
When you register the GCS Artifact Store, you can generate a GCP Service Account Key, store it in a ZenML Secret and then reference it in the Artifact Store configuration.
This method has some advantages over the implicit authentication method: | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/gcp | 647 |
tive Directory credentials or generic OIDC tokens.This authentication method only requires a GCP workload identity external account JSON file that only contains the configuration for the external account without any sensitive credentials. It allows implementing a two layer authentication scheme that keeps the set of permissions associated with implicit credentials down to the bare minimum and grants permissions to the privilege-bearing GCP service account instead.
This authentication method can be used to authenticate to GCP services using credentials from other cloud providers or identity providers. When used with workloads running on AWS or Azure, it involves automatically picking up credentials from the AWS IAM or Azure AD identity associated with the workload and using them to authenticate to GCP services. This means that the result depends on the environment where the ZenML server is deployed and is thus not fully reproducible.
When used with AWS or Azure implicit in-cloud authentication, this method may constitute a security risk, because it can give users access to the identity (e.g. AWS IAM role or Azure AD principal) implicitly associated with the environment where the ZenML server is running. For this reason, all implicit authentication methods are disabled by default and need to be explicitly enabled by setting the ZENML_ENABLE_IMPLICIT_AUTH_METHODS environment variable or the helm chart enableImplicitAuthMethods configuration option to true in the ZenML deployment.
By default, the GCP connector generates temporary OAuth 2.0 tokens from the external account credentials and distributes them to clients. The tokens have a limited lifetime of 1 hour. This behavior can be disabled by setting the generate_temporary_tokens configuration option to False, in which case, the connector will distribute the external account credentials JSON to clients instead (not recommended). | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 335 |
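As an illustration, here is a hedged sketch of registering such a connector, assuming an external account JSON file saved locally (check the connector documentation for the exact configuration attribute names):

zenml service-connector register gcp-workload-identity --type gcp \
    --auth-method external-account \
    --project_id=my-project \
    --external_account_json=@external-account.json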
Return multiple outputs from a step
Use Annotated to return multiple outputs from a step and name them for easy retrieval and dashboard display.
You can use the Annotated type to return multiple outputs from a step and give each output a name. Naming your step outputs will help you retrieve the specific artifact later and also improves the readability of your pipeline's dashboard.
from typing import Annotated, Tuple
import pandas as pd
from zenml import step
@step
def clean_data(
data: pd.DataFrame,
) -> Tuple[
Annotated[pd.DataFrame, "x_train"],
Annotated[pd.DataFrame, "x_test"],
Annotated[pd.Series, "y_train"],
Annotated[pd.Series, "y_test"],
]:
from sklearn.model_selection import train_test_split
x = data.drop("target", axis=1)
y = data["target"]
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
return x_train, x_test, y_train, y_test
Inside the step, we split the input data into features (x) and target (y), and then use train_test_split from scikit-learn to split the data into training and testing sets. The resulting DataFrames and Series are returned as a tuple, with each element annotated with its respective name.
By using Annotated, we can easily identify and retrieve specific artifacts later in the pipeline. Additionally, the names will be displayed on the pipeline's dashboard, making it more readable and understandable.
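For example, here is a minimal sketch of retrieving one of the named outputs after a run (the pipeline name is hypothetical and the exact fetching API may differ slightly between ZenML versions):

from zenml.client import Client

# Load the named "x_train" output of the clean_data step from the latest run
run = Client().get_pipeline("training_pipeline").last_run
x_train = run.steps["clean_data"].outputs["x_train"].load()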
for secrets that have a name starting with zenml-

This can be achieved by creating two custom IAM roles and attaching them to the principal (e.g. user or service account) that will be used to access the GCP Secrets Manager API, with a condition configured when attaching the second role to limit access to secrets with a name prefix of zenml-. The following gcloud CLI command examples can be used as a starting point:
gcloud iam roles create ZenMLServerSecretsStoreCreator \
--project <your GCP project ID> \
--title "ZenML Server Secrets Store Creator" \
--description "Allow the ZenML Server to create new secrets" \
--stage GA \
--permissions "secretmanager.secrets.create"
gcloud iam roles create ZenMLServerSecretsStoreEditor \
--project <your GCP project ID> \
--title "ZenML Server Secrets Store Editor" \
--description "Allow the ZenML Server to manage its secrets" \
--stage GA \
--permissions "secretmanager.secrets.get,secretmanager.secrets.update,secretmanager.versions.access,secretmanager.versions.add,secretmanager.secrets.delete"
gcloud projects add-iam-policy-binding <your GCP project ID> \
--member serviceAccount:<your GCP service account email> \
--role projects/<your GCP project ID>/roles/ZenMLServerSecretsStoreCreator \
--condition None
# NOTE: use the GCP project NUMBER, not the project ID in the condition
gcloud projects add-iam-policy-binding <your GCP project ID> \
--member serviceAccount:<your GCP service account email> \
--role projects/<your GCP project ID>/roles/ZenMLServerSecretsStoreEditor \
--condition 'title=limit_access_zenml,description="Limit access to secrets with prefix zenml-",expression=resource.name.startsWith("projects/<your GCP project NUMBER>/secrets/zenml-")'
The following configuration options are supported:
ZENML_SECRETS_STORE_AUTH_METHOD: The GCP Service Connector authentication method to use (e.g. service-account). | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-docker | 428 |
import json

import requests

# Fetch existing services with the same pipeline name, step name and model name
existing_services = model_deployer.find_model_server(
    pipeline_name=pipeline_name,
    pipeline_step_name=pipeline_step_name,
    model_name=model_name,
)

if not existing_services:
    raise RuntimeError(
        f"No MLflow prediction service deployed by step "
        f"'{pipeline_step_name}' in pipeline '{pipeline_name}' with name "
        f"'{model_name}' is currently running."
    )

service = existing_services[0]

# Let's try to run an inference request against the prediction service
payload = json.dumps(
    {
        "inputs": {"messages": [{"role": "user", "content": "Tell a joke!"}]},
        "params": {
            "temperature": 0.5,
            "max_tokens": 20,
        },
    }
)
response = requests.post(
    url=service.get_prediction_url(),
    data=payload,
    headers={"Content-Type": "application/json"},
)
response.json()
Within the same pipeline, you can use the service from the previous step to run inference, this time using the pre-built predict method:
from typing_extensions import Annotated
import numpy as np
from zenml import step
from zenml.integrations.mlflow.services import MLFlowDeploymentService
# Use the service for inference
@step
def predictor(
service: MLFlowDeploymentService,
data: np.ndarray,
) -> Annotated[np.ndarray, "predictions"]:
"""Run a inference request against a prediction service"""
prediction = service.predict(data)
prediction = prediction.argmax(axis=-1)
return prediction
For more information and a full list of configurable attributes of the MLflow Model Deployer, check out the SDK Docs.
Migration guide 0.58.2 β 0.60.0
How to migrate from ZenML 0.58.2 to 0.60.0 (Pydantic 2 edition).
ZenML now uses Pydantic v2. π₯³
This upgrade comes with a set of critical updates. While your user experience mostly remains unaffected, you might see unexpected behavior due to the changes in our dependencies. Moreover, since Pydantic v2 provides a slightly stricter validation process, you might end up bumping into some validation errors which were not caught before, but it is all for the better π If you run into any other errors, please let us know either on GitHub or on our Slack.
Changes in some of the critical dependencies
SQLModel is one of the core dependencies of ZenML and prior to this upgrade, we were utilizing version 0.0.8. However, this version is relatively outdated and incompatible with Pydantic v2. Within the scope of this upgrade, we upgraded it to 0.0.18.
Due to the change in the SQLModel version, we also had to upgrade our SQLAlchemy dependency from v1 to v2. While this does not affect the way that you are using ZenML, if you are using SQLAlchemy in your environment, you might have to migrate your code as well. For a detailed list of changes, feel free to check their migration guide.
Changes in pydantic
Pydantic v2 brings a lot of new and exciting changes to the table. The core logic now uses Rust and it is much faster and more efficient in terms of performance. On top of it, the main concepts like model design, configuration, validation, or serialization now include a lot of new cool features. If you are using pydantic in your workflow and are interested in the new changes, you can check the brilliant migration guide provided by the pydantic team to see the full list of changes.
Changes in our integrations
Airflow Orchestrator
Orchestrating your pipelines to run on Airflow.
ZenML pipelines can be executed natively as Airflow DAGs. This brings together the power of the Airflow orchestration with the ML-specific benefits of ZenML pipelines. Each ZenML step runs in a separate Docker container which is scheduled and started using Airflow.
If you're going to use a remote deployment of Airflow, you'll also need a remote ZenML deployment.
When to use it
You should use the Airflow orchestrator if
you're looking for a proven production-grade orchestrator.
you're already using Airflow.
you want to run your pipelines locally.
you're willing to deploy and maintain Airflow.
How to deploy it
The Airflow orchestrator can be used to run pipelines locally as well as remotely. In the local case, no additional setup is necessary.
There are many options to use a deployed Airflow server:
Use one of ZenML's Airflow stack recipes. This is the simplest solution to get ZenML working with Airflow, as the recipe also takes care of additional steps such as installing required Python dependencies in your Airflow server environment.
Use a managed deployment of Airflow such as Google Cloud Composer , Amazon MWAA, or Astronomer.
Deploy Airflow manually. Check out the official Airflow docs for more information.
If you're not using mlstacks to deploy Airflow, there are some additional Python packages that you'll need to install in the Python environment of your Airflow server:
pydantic~=1.9.2: The Airflow DAG files that ZenML creates for you require Pydantic to parse and validate configuration files.
apache-airflow-providers-docker or apache-airflow-providers-cncf-kubernetes, depending on which Airflow operator you'll be using to run your pipeline steps. Check out this section for more information on supported operators.
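For example, if you plan to use the Docker operator, the extra packages could be installed with something like the following (a sketch; pin versions to match your Airflow deployment):

pip install "pydantic~=1.9.2" apache-airflow-providers-docker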
How to use it
To use the Airflow orchestrator, we need: | stack-components | https://docs.zenml.io/stack-components/orchestrators/airflow | 405 |
BentoML
Deploying your models locally with BentoML.
BentoML is an open-source framework for machine learning model serving. It can be used to deploy models locally, in a cloud environment, or in a Kubernetes environment.
The BentoML Model Deployer is one of the available flavors of the Model Deployer stack component. Provided with the BentoML integration it can be used to deploy and manage BentoML models or Bento on a local running HTTP server.
The BentoML Model Deployer can be used to deploy models for local development and production use cases. While the integration mainly works in a local environment where pipelines are run, the resulting Bento can be exported, containerized, and deployed in a remote environment. Within the BentoML ecosystem, Yatai and bentoctl are the tools responsible for deploying Bentos into Kubernetes clusters and cloud platforms. Full support for these advanced tools is in progress and will be available soon.
When to use it?
You should use the BentoML Model Deployer to:
Standardize the way you deploy your models to production within your organization.
Deploy your models in a simple way, while retaining the ability to turn them into a production-ready solution when the time comes.
If you are looking to deploy your models with other Kubernetes-based solutions, you can take a look at one of the other Model Deployer Flavors available in ZenML.
BentoML also allows you to deploy your models in a more complex production-grade setting. Bentoctl is one of the tools that can help you get there. Bentoctl takes your built Bento from a ZenML pipeline and deploys it with bentoctl into a cloud environment such as AWS Lambda, AWS SageMaker, Google Cloud Functions, Google Cloud AI Platform, or Azure Functions. Read more about this in the From Local to Cloud with bentoctl section. | stack-components | https://docs.zenml.io/stack-components/model-deployers/bentoml | 390 |
DockerHub
Storing container images in DockerHub.
The DockerHub container registry is a container registry flavor that comes built-in with ZenML and uses DockerHub to store container images.
When to use it
You should use the DockerHub container registry if:
one or more components of your stack need to pull or push container images.
you have a DockerHub account. If you're not using DockerHub, take a look at the other container registry flavors.
How to deploy it
To use the DockerHub container registry, all you need to do is create a DockerHub account.
When this container registry is used in a ZenML stack, the Docker images that are built will be published in a **public** repository and everyone will be able to pull your images. If you want to use a private repository instead, you'll have to create a private repository on the website before running the pipeline. The repository name depends on the remote orchestrator or step operator that you're using in your stack.
How to find the registry URI
The DockerHub container registry URI should have one of the two following formats:
<ACCOUNT_NAME>
# or
docker.io/<ACCOUNT_NAME>
# Examples:
zenml
my-username
docker.io/zenml
docker.io/my-username
To figure out the URI for your registry:
Find out the account name of your DockerHub account.
Use the account name to fill the template docker.io/<ACCOUNT_NAME> and get your URI.
How to use it
To use the DockerHub container registry, we need:
Docker installed and running.
The registry URI. Check out the previous section on the URI format and how to get the URI for your registry.
We can then register the container registry and use it in our active stack:
zenml container-registry register <NAME> \
--flavor=dockerhub \
--uri=<REGISTRY_URI>
# Add the container registry to the active stack
zenml stack update -c <NAME> | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/dockerhub | 399 |
```sh
zenml container-registry connect gcr-zenml-core --connector gcp-demo-multi
```
Example Command Output
```text
Running with active workspace: 'default' (global)
Running with active stack: 'default' (global)
Successfully connected container registry `gcr-zenml-core` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE      │ RESOURCE NAMES    ┃
┠──────────────────────────────────────┼────────────────┼────────────────┼────────────────────┼───────────────────┨
┃ eeeabc13-9203-463b-aa52-216e629e903c │ gcp-demo-multi │ 🔵 gcp         │ 🐳 docker-registry │ gcr.io/zenml-core ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┛
```
Combine all Stack Components together into a Stack and set it as active (also throw in a local Image Builder for completion):
```sh
zenml image-builder register local --flavor local
```
Example Command Output
```text
Running with active workspace: 'default' (global)
Running with active stack: 'default' (global)
Successfully registered image_builder `local`.
```
```sh
zenml stack register gcp-demo -a gcs-zenml-bucket-sl -o gke-zenml-test-cluster -c gcr-zenml-core -i local --set
```
Example Command Output
```text
Running with active workspace: 'default' (global)
Stack 'gcp-demo' successfully registered!
Active global stack set to: 'gcp-demo'
```
Finally, run a simple pipeline to prove that everything works as expected. We'll use the simplest pipelines possible for this example:
from zenml import pipeline, step
@step
def step_1() -> str:
"""Returns the `world` string."""
return "world"
@step(enable_cache=False)
def step_2(input_one: str, input_two: str) -> None:
"""Combines the two strings at its input and prints them."""
combined_str = f"{input_one} {input_two}"
print(combined_str) | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 569 |
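# NOTE: assumed continuation of the truncated example - wire the steps into a
# pipeline (the "hello" literal and pipeline name are illustrative).
@pipeline
def my_pipeline():
    output_step_one = step_1()
    step_2(input_one="hello", input_two=output_step_one)

if __name__ == "__main__":
    my_pipeline()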
as the ZenML quickstart. You can clone it like so:
git clone --depth 1 git@github.com:zenml-io/zenml.git
cd zenml/examples/quickstart
pip install -r requirements.txt
zenml init
To run a pipeline using the new stack:
Set the stack as active on your client:
zenml stack set a_new_local_stack
Run your pipeline code:
python run.py --training-pipeline
Keep this code handy as we'll be using it in the next chapters!
```sh
zenml orchestrator connect aks-demo-cluster --connector azure-service-principal
```
Example Command Output
```
Successfully connected orchestrator `aks-demo-cluster` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         │ CONNECTOR NAME          │ CONNECTOR TYPE │ RESOURCE TYPE         │ RESOURCE NAMES                                ┃
┠──────────────────────────────────────┼─────────────────────────┼────────────────┼───────────────────────┼───────────────────────────────────────────────┨
┃ f2316191-d20b-4348-a68b-f5e347862196 │ azure-service-principal │ 🇦 azure       │ 🌀 kubernetes-cluster │ demo-zenml-demos/demo-zenml-terraform-cluster ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```
Register and connect an Azure Container Registry Stack Component to an ACR container registry:
```sh
zenml container-registry register acr-demo-registry --flavor azure --uri=demozenmlcontainerregistry.azurecr.io
```
Example Command Output
```
Successfully registered container_registry `acr-demo-registry`.
```
```sh
zenml container-registry connect acr-demo-registry --connector azure-service-principal
```
Example Command Output
```
Successfully connected container registry `acr-demo-registry` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         │ CONNECTOR NAME          │ CONNECTOR TYPE │ RESOURCE TYPE      │ RESOURCE NAMES                        ┃
┠──────────────────────────────────────┼─────────────────────────┼────────────────┼────────────────────┼───────────────────────────────────────┨
┃ f2316191-d20b-4348-a68b-f5e347862196 │ azure-service-principal │ 🇦 azure       │ 🐳 docker-registry │ demozenmlcontainerregistry.azurecr.io ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
access temporarily with someone else in your team.
Using other authentication methods like IAM role, Session Token, or Federation Token will automatically generate and refresh STS tokens for clients upon request.
An AWS region is required and the connector may only be used to access AWS resources in the specified region.
Fetching STS tokens from the local AWS CLI is possible if the AWS CLI is already configured with valid credentials. In our example, the connector's AWS CLI profile is configured with an IAM user Secret Key. We need to force the ZenML CLI to use the STS token authentication by passing the --auth-method sts-token option; otherwise, it would automatically use the session token authentication method:
AWS_PROFILE=connectors zenml service-connector register aws-sts-token --type aws --auto-configure --auth-method sts-token
Example Command Output
Registering service connector 'aws-sts-token'...
Successfully registered service connector `aws-sts-token` with access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE         │ RESOURCE NAMES                                ┃
┠───────────────────────┼───────────────────────────────────────────────┨
┃ 🔶 aws-generic        │ us-east-1                                     ┃
┠───────────────────────┼───────────────────────────────────────────────┨
┃ 📦 s3-bucket          │ s3://zenfiles                                 ┃
┃                       │ s3://zenml-demos                              ┃
┃                       │ s3://zenml-generative-chat                    ┃
┠───────────────────────┼───────────────────────────────────────────────┨
┃ 🌀 kubernetes-cluster │ zenhacks-cluster                              ┃
┠───────────────────────┼───────────────────────────────────────────────┨
┃ 🐳 docker-registry    │ 715803424590.dkr.ecr.us-east-1.amazonaws.com  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Kubernetes
Learn how to deploy ZenML pipelines on a Kubernetes cluster.
The ZenML Kubernetes Orchestrator allows you to run your ML pipelines on a Kubernetes cluster without writing Kubernetes code. It's a lightweight alternative to more complex orchestrators like Airflow or Kubeflow.
Prerequisites
To use the Kubernetes Orchestrator, you'll need:
ZenML kubernetes integration installed (zenml integration install kubernetes)
Docker installed and running
kubectl installed
A remote artifact store and container registry in your ZenML stack
A deployed Kubernetes cluster
A configured kubectl context pointing to the cluster (optional, see below)
Deploying the Orchestrator
You can deploy the orchestrator from the ZenML CLI:
zenml orchestrator deploy k8s_orchestrator --flavor=kubernetes --provider=<YOUR_PROVIDER>
Configuring the Orchestrator
There are two ways to configure the orchestrator:
Using a Service Connector to connect to the remote cluster. This is the recommended approach, especially for cloud-managed clusters. No local kubectl context is needed.
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubernetes
zenml service-connector list-resources --resource-type kubernetes-cluster -e
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
Configuring kubectl with a context pointing to the remote cluster and setting the kubernetes_context in the orchestrator config:
zenml orchestrator register <ORCHESTRATOR_NAME> \
--flavor=kubernetes \
--kubernetes_context=<KUBERNETES_CONTEXT>
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
Running a Pipeline
Once configured, you can run any ZenML pipeline using the Kubernetes Orchestrator:
python your_pipeline.py
This will create a Kubernetes pod for each step in your pipeline. You can interact with the pods using kubectl commands. | how-to | https://docs.zenml.io/v/docs/how-to/popular-integrations/kubernetes | 418 |
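For per-pipeline customization of the step pods, the orchestrator also exposes settings. Below is a minimal sketch, assuming the kubernetes integration is installed (the node selector key and value are illustrative):

```python
from zenml import pipeline
from zenml.integrations.kubernetes.flavors import KubernetesOrchestratorSettings

# Illustrative pod-level customization; adjust to your own cluster.
k8s_settings = KubernetesOrchestratorSettings(
    pod_settings={
        "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"},
    },
)

@pipeline(settings={"orchestrator.kubernetes": k8s_settings})
def my_pipeline():
    ...
```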
```sh
zenml alerter register slack_alerter \
    --flavor=slack \
    --slack_token=<SLACK_TOKEN> \
    --default_slack_channel_id=<SLACK_CHANNEL_ID>
```
Here is where you can find the required parameters:
<SLACK_CHANNEL_ID>: Open your desired Slack channel in a browser, and copy out the last part of the URL starting with C.....
<SLACK_TOKEN>: This is the Slack token of your bot. You can find it in the Slack app settings under OAuth & Permissions. IMPORTANT: Please make sure that the token is the Bot User OAuth Token not the User OAuth Token.
After you have registered the slack_alerter, you can add it to your stack like this:
zenml stack register ... -al slack_alerter
How to Use the Slack Alerter
After you have a SlackAlerter configured in your stack, you can directly import the slack_alerter_post_step and slack_alerter_ask_step steps and use them in your pipelines.
Since these steps expect a string message as input (which needs to be the output of another step), you typically also need to define a dedicated formatter step that takes whatever data you want to communicate and generates the string message that the alerter should post.
As an example, adding slack_alerter_ask_step() to your pipeline could look like this:
from zenml.integrations.slack.steps.slack_alerter_ask_step import slack_alerter_ask_step
from zenml import step, pipeline
@step
def my_formatter_step(artifact_to_be_communicated) -> str:
return f"Here is my artifact {artifact_to_be_communicated}!"
@pipeline
def my_pipeline(...):
...
artifact_to_be_communicated = ...
message = my_formatter_step(artifact_to_be_communicated)
approved = slack_alerter_ask_step(message)
... # Potentially have different behavior in subsequent steps if `approved`
if __name__ == "__main__":
my_pipeline()
An example of adding a custom Slack block as part of any alerter logic for your pipeline could look like this:
from typing import List, Dict
from zenml.integrations.slack.steps.slack_alerter_post_step import slack_alerter_post_step
from zenml.integrations.slack.alerters.slack_alerter import SlackAlerterParameters
from zenml import step, pipeline
@step | stack-components | https://docs.zenml.io/stack-components/alerters/slack | 470 |
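def my_custom_block_step(block_message: str) -> SlackAlerterParameters:
    """Assumed continuation of the truncated example: builds custom Slack
    blocks and hands them to the alerter via SlackAlerterParameters (the
    block contents are illustrative)."""
    my_custom_block = [
        {
            "type": "header",
            "text": {
                "type": "plain_text",
                "text": f":tada: {block_message}",
                "emoji": True,
            },
        }
    ]
    return SlackAlerterParameters(blocks=my_custom_block)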
ettings to specify AzureML step operator settings.
Difference between stack component settings at registration-time vs real-time
For stack-component-specific settings, you might be wondering what the difference is between these and the configuration passed in while doing zenml stack-component register <NAME> --config1=configvalue --config2=configvalue, etc. The answer is that the configuration passed in at registration time is static and fixed throughout all pipeline runs, while the settings can change.
A good example of this is the MLflow Experiment Tracker: configuration that remains static, such as the tracking_url, is sent through at registration time, while runtime configuration, such as the experiment_name (which might change every pipeline run), is sent through as runtime settings.
Even though settings can be overridden at runtime, you can also specify default values for settings while configuring a stack component. For example, you could set a default value for the nested setting of your MLflow experiment tracker:
zenml experiment-tracker register <NAME> --flavor=mlflow --nested=True
This means that all pipelines that run using this experiment tracker use nested MLflow runs unless overridden by specifying settings for the pipeline at runtime.
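Conversely, an individual step or pipeline can override such a default at runtime. A minimal sketch, assuming the mlflow integration is installed and the tracker is named as registered above:

```python
from zenml import step
from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import (
    MLFlowExperimentTrackerSettings,
)

@step(
    experiment_tracker="<NAME>",
    settings={
        # Overrides the tracker's default `nested=True` for this step only.
        "experiment_tracker.mlflow": MLFlowExperimentTrackerSettings(nested=False)
    },
)
def my_tracked_step() -> None:
    ...
```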
Using the right key for Stack-component-specific settings
When specifying stack-component-specific settings, a key needs to be passed. This key should always correspond to the pattern: <COMPONENT_CATEGORY>.<COMPONENT_FLAVOR>
For example, the SagemakerStepOperator supports passing in estimator_args. The way to specify this would be to use the key step_operator.sagemaker
@step(step_operator="nameofstepoperator", settings={"step_operator.sagemaker": {"estimator_args": {"instance_type": "m7g.medium"}}})
def my_step():
...
# Using the class
@step(step_operator="nameofstepoperator", settings={"step_operator.sagemaker": SagemakerStepOperatorSettings(instance_type="m7g.medium")})
def my_step():
...
or in YAML:
steps:
my_step: | how-to | https://docs.zenml.io/how-to/use-configuration-files/runtime-configuration | 399 |
T, ERROR, WARN, INFO (default), DEBUG or CRITICAL.
ZENML_STORE_BACKUP_STRATEGY: This variable controls the database backup strategy used by the ZenML server. See the Database backup and recovery section for more details about this feature and other related environment variables. Defaults to in-memory.
ZENML_SERVER_RATE_LIMIT_ENABLED: This variable controls the rate limiting for the ZenML API (currently only for the LOGIN endpoint). It is disabled by default, so set it to 1 only if you need to enable rate limiting. To determine unique users, an X_FORWARDED_FOR header or request.client.host is used, so before enabling this make sure that your network configuration associates proper information with your clients in order to avoid disruptions for legitimate requests.
ZENML_SERVER_LOGIN_RATE_LIMIT_MINUTE: If rate limiting is enabled, this variable controls how many requests will be allowed to query the login endpoint in a one minute interval. Set it to a desired integer value; defaults to 5.
ZENML_SERVER_LOGIN_RATE_LIMIT_DAY: If rate limiting is enabled, this variable controls how many requests will be allowed to query the login endpoint in a one-day interval. Set it to a desired integer value; defaults to 1000.
If none of the ZENML_STORE_* variables are set, the container will default to creating and using an SQLite database file stored at /zenml/.zenconfig/local_stores/default_zen_store/zenml.db inside the container. The /zenml/.zenconfig/local_stores base path where the default SQLite database is located can optionally be overridden by setting the ZENML_LOCAL_STORES_PATH environment variable to point to a different path (e.g. a persistent volume or directory that is mounted from the host).
Secret store environment variables | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-docker | 360 |
urns:
Deepchecks test suite execution result
"""# validation pre-processing (e.g. dataset preparation) can take place here
data_validator = DeepchecksDataValidator.get_active_data_validator()
suite = data_validator.data_validation(
dataset=dataset,
check_list=[
DeepchecksDataIntegrityCheck.TABULAR_OUTLIER_SAMPLE_DETECTION,
DeepchecksDataIntegrityCheck.TABULAR_STRING_LENGTH_OUT_OF_BOUNDS,
],
# validation post-processing (e.g. interpret results, take actions) can happen here
return suite
The arguments that the Deepchecks Data Validator methods can take in are the same as those used for the Deepchecks standard steps.
Have a look at the complete list of methods and parameters available in the DeepchecksDataValidator API in the SDK docs.
Call Deepchecks directly
You can use the Deepchecks library directly in your custom pipeline steps, and only leverage ZenML's capability of serializing, versioning and storing the SuiteResult objects in its Artifact Store, e.g.:
import pandas as pd
import deepchecks.tabular.checks as tabular_checks
from deepchecks.core.suite import SuiteResult
from deepchecks.tabular import Suite
from deepchecks.tabular import Dataset
from zenml import step
@step
def data_integrity_check(
dataset: pd.DataFrame,
) -> SuiteResult:
"""Custom data integrity check step with Deepchecks
Args:
dataset: a Pandas DataFrame
Returns:
Deepchecks test suite execution result
"""
# validation pre-processing (e.g. dataset preparation) can take place here
train_dataset = Dataset(
dataset,
label='class',
cat_features=['country', 'state']
)
suite = Suite(name="custom")
check = tabular_checks.OutlierSampleDetection(
nearest_neighbors_percent=0.01,
extent_parameter=3,
)
check.add_condition_outlier_ratio_less_or_equal(
max_outliers_ratio=0.007,
outlier_score_threshold=0.5,
)
suite.add(check)
check = tabular_checks.StringLengthOutOfBounds(
num_percentiles=1000,
min_unique_values=3,
)
check.add_condition_number_of_outliers_less_or_equal(
max_outliers=3,
)
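# NOTE: assumed continuation of the truncated example - register the final
# check and execute the suite on the dataset.
suite.add(check)
return suite.run(train_dataset=train_dataset)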
┠───────────────────────┼────────────────────────────────────────────────┨
┃ 🌀 kubernetes-cluster │ demo-zenml-demos/demo-zenml-terraform-cluster  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
The login CLI command can be used to configure the local Kubernetes CLI to access a Kubernetes cluster reachable through an Azure Service Connector:
zenml service-connector login azure-service-principal --resource-type kubernetes-cluster --resource-id demo-zenml-demos/demo-zenml-terraform-cluster
Example Command Output
Attempting to configure local client using service connector 'azure-service-principal'...
Updated local kubeconfig with the cluster details. The current kubectl context was set to 'demo-zenml-terraform-cluster'.
The 'azure-service-principal' Kubernetes Service Connector connector was used to successfully configure the local Kubernetes cluster client/SDK.
The local Kubernetes CLI can now be used to interact with the Kubernetes cluster:
kubectl cluster-info
Example Command Output
Kubernetes control plane is running at https://demo-43c5776f7.hcp.westeurope.azmk8s.io:443
CoreDNS is running at https://demo-43c5776f7.hcp.westeurope.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://demo-43c5776f7.hcp.westeurope.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
A similar process is possible with ACR container registries:
zenml service-connector verify azure-service-principal --resource-type docker-registry
Example Command Output
Verifying service connector 'azure-service-principal'...
Service connector 'azure-service-principal' is correctly configured with valid credentials and has access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE      │ RESOURCE NAMES                        ┃
┠────────────────────┼───────────────────────────────────────┨
┃ 🐳 docker-registry │ demozenmlcontainerregistry.azurecr.io ┃
┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
┃ ... │ s3://zenml-public-datasets ┃
┗━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
After having set up or decided on an AWS Service Connector to use to connect to the target S3 bucket, you can register the S3 Artifact Store as follows:
# Register the S3 artifact-store and reference the target S3 bucket
zenml artifact-store register <S3_STORE_NAME> -f s3 \
--path='s3://your-bucket'
# Connect the S3 artifact-store to the target bucket via an AWS Service Connector
zenml artifact-store connect <S3_STORE_NAME> -i
A non-interactive version that connects the S3 Artifact Store to a target S3 bucket through an AWS Service Connector:
zenml artifact-store connect <S3_STORE_NAME> --connector <CONNECTOR_ID>
Example Command Output
$ zenml artifact-store connect s3-zenfiles --connector s3-zenfiles
Successfully connected artifact store `s3-zenfiles` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃
┠──────────────────────────────────────┼────────────────┼────────────────┼───────────────┼────────────────┨
┃ c4ee3f0a-bc69-4c79-9a74-297b2dd47d50 │ s3-zenfiles    │ 🔶 aws         │ 📦 s3-bucket  │ s3://zenfiles  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛
As a final step, you can use the S3 Artifact Store in a ZenML Stack:
# Register and set a stack with the new artifact store
zenml stack register <STACK_NAME> -a <S3_STORE_NAME> ... --set
When you register the S3 Artifact Store, you can generate an AWS access key, store it in a ZenML Secret and then reference it in the Artifact Store configuration.
This method has some advantages over the implicit authentication method:
you don't need to install and configure the AWS CLI on your host | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/s3 | 631 |
zenml service-connector describe gcp-auto
Example Command Output
Service connector 'gcp-auto' of type 'gcp' with id 'fe16f141-7406-437e-a579-acebe618a293' is owned by user 'default' and is 'private'.
'gcp-auto' gcp Service Connector Details
┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ PROPERTY         │ VALUE                                                                     ┃
┠──────────────────┼───────────────────────────────────────────────────────────────────────────┨
┃ ID               │ fe16f141-7406-437e-a579-acebe618a293                                      ┃
┠──────────────────┼───────────────────────────────────────────────────────────────────────────┨
┃ NAME             │ gcp-auto                                                                  ┃
┠──────────────────┼───────────────────────────────────────────────────────────────────────────┨
┃ TYPE             │ 🔵 gcp                                                                    ┃
┠──────────────────┼───────────────────────────────────────────────────────────────────────────┨
┃ AUTH METHOD      │ user-account                                                              ┃
┠──────────────────┼───────────────────────────────────────────────────────────────────────────┨
┃ RESOURCE TYPES   │ 🔵 gcp-generic, 📦 gcs-bucket, 🌀 kubernetes-cluster, 🐳 docker-registry  ┃
┠──────────────────┼───────────────────────────────────────────────────────────────────────────┨
┃ RESOURCE NAME    │ <multiple>                                                                ┃
┠──────────────────┼───────────────────────────────────────────────────────────────────────────┨
┃ SECRET ID        │ 5eca8f6e-291f-4958-ae2d-a3e847a1ad8a                                      ┃
┠──────────────────┼───────────────────────────────────────────────────────────────────────────┨
┃ SESSION DURATION │ N/A                                                                       ┃
model, and associated artifacts and runs like this:
zenml model version list <MODEL_NAME> can be used to list all versions of a particular model.
The following commands can be used to list the various pipeline runs associated with a model:
zenml model version runs <MODEL_NAME> <MODEL_VERSIONNAME>
The following commands can be used to list the various artifacts associated with a model:
zenml model version data_artifacts <MODEL_NAME> <MODEL_VERSIONNAME>
zenml model version model_artifacts <MODEL_NAME> <MODEL_VERSIONNAME>
zenml model version deployment_artifacts <MODEL_NAME> <MODEL_VERSIONNAME>
The ZenML Pro dashboard has additional capabilities that include visualizing all associated runs and artifacts for a model version:
Fetching the model in a pipeline
When configured at the pipeline or step level, the model will be available through the StepContext or PipelineContext.
import pandas as pd
from sklearn.base import ClassifierMixin
from typing_extensions import Annotated
from zenml import get_step_context, get_pipeline_context, step, pipeline, Model
@step
def svc_trainer(
X_train: pd.DataFrame,
y_train: pd.Series,
gamma: float = 0.001,
) -> Annotated[ClassifierMixin, "trained_model"]:
# This will return the model specified in the
# @pipeline decorator. In this case, the production version of
# the `iris_classifier` model will be returned.
model = get_step_context().model
...
@pipeline(
model=Model(
# The name uniquely identifies this model
name="iris_classifier",
# Pass the stage you want to get the right model
version="production",
),
)
def training_pipeline(gamma: float = 0.002):
# Now this pipeline will have the production `iris_classifier` model active.
model = get_pipeline_context().model
X_train, X_test, y_train, y_test = training_data_loader()
svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)
Logging metadata to the Model object
Just as one can associate metadata with artifacts, models too can take a dictionary of key-value pairs to capture their metadata. This is achieved using the log_model_metadata method: | user-guide | https://docs.zenml.io/v/docs/user-guide/starter-guide/track-ml-models | 430 |
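The excerpt ends before the example; below is a minimal sketch of the call, with an illustrative metric value (when no model name is passed explicitly, the metadata is attached to the model configured on the current step or pipeline):

```python
from zenml import step, log_model_metadata

@step
def svc_trainer() -> None:
    ...
    log_model_metadata(
        # Attached to the model in the current step/pipeline context.
        metadata={"accuracy": 0.92},
    )
```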
={
"count": len(docs_urls),
},
return docs_urls
The get_all_pages function simply crawls our documentation website and retrieves a unique set of URLs. We've limited it to only scrape the documentation relating to the most recent releases so that we're not mixing old syntax and information with the new. This is a simple way to ensure that we're only ingesting the most relevant and up-to-date information into our pipeline.
We also log the count of those URLs as metadata for the step output. This will be visible in the dashboard for extra visibility around the data that's being ingested. Of course, you can also add more complex logic to this step, such as filtering out certain URLs or adding more metadata.
Once we have our list of URLs, we use the unstructured library to load and parse the pages. This will allow us to use the text without having to worry about the details of the HTML structure and/or markup. This specifically helps us keep the text content as small as possible since we are operating in a constrained environment with LLMs.
from typing import List
from unstructured.partition.html import partition_html
from zenml import step
@step
def web_url_loader(urls: List[str]) -> List[str]:
"""Loads documents from a list of URLs."""
document_texts = []
for url in urls:
elements = partition_html(url=url)
text = "\n\n".join([str(el) for el in elements])
document_texts.append(text)
return document_texts
The previously-mentioned frameworks offer many more options when it comes to data ingestion, including the ability to load documents from a variety of sources, preprocess the text, and extract relevant features. For our purposes, though, we don't need anything too fancy. It also makes our pipeline easier to debug since we can see exactly what's being loaded and how it's being processed. You don't get that same level of visibility with more complex frameworks.
Preprocessing the data | user-guide | https://docs.zenml.io/user-guide/llmops-guide/rag-with-zenml/data-ingestion | 393 |
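The section is cut off here; as a minimal illustration of what this preprocessing typically involves, a naive character-based splitter that breaks the loaded documents into overlapping chunks might look like this (the chunk sizes are arbitrary):

```python
from typing import List

def split_documents(
    texts: List[str], chunk_size: int = 500, overlap: int = 50
) -> List[str]:
    """Naively splits documents into overlapping character chunks."""
    chunks = []
    for text in texts:
        start = 0
        while start < len(text):
            chunks.append(text[start : start + chunk_size])
            start += chunk_size - overlap
    return chunks
```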
Kubernetes Service Connector
Configuring Kubernetes Service Connectors to connect ZenML to Kubernetes clusters.
The ZenML Kubernetes Service Connector facilitates authenticating and connecting to a Kubernetes cluster. The connector can be used to access any generic Kubernetes cluster by providing pre-authenticated Kubernetes Python clients to Stack Components that are linked to it, and also allows configuring the local Kubernetes CLI (i.e. kubectl).
Prerequisites
The Kubernetes Service Connector is part of the Kubernetes ZenML integration. You can either install the entire integration or use a pypi extra to install it independently of the integration:
pip install "zenml[connectors-kubernetes]" installs only prerequisites for the Kubernetes Service Connector Type
zenml integration install kubernetes installs the entire Kubernetes ZenML integration
A local Kubernetes CLI (i.e. kubectl) and locally configured kubectl contexts are not required to access Kubernetes clusters in your Stack Components through the Kubernetes Service Connector.
$ zenml service-connector list-types --type kubernetes
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━━┓
┃ NAME                         │ TYPE          │ RESOURCE TYPES        │ AUTH METHODS │ LOCAL │ REMOTE ┃
┠──────────────────────────────┼───────────────┼───────────────────────┼──────────────┼───────┼────────┨
┃ Kubernetes Service Connector │ 🌀 kubernetes │ 🌀 kubernetes-cluster │ password     │ ✅    │ ✅     ┃
┃                              │               │                       │ token        │       │        ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛
Resource Types
The Kubernetes Service Connector only supports authenticating to and granting access to a generic Kubernetes cluster. This type of resource is identified by the kubernetes-cluster Resource Type. | how-to | https://docs.zenml.io/how-to/auth-management/kubernetes-service-connector | 470 |
"""Base Materializer to realize artifact data."""ASSOCIATED_ARTIFACT_TYPE = ArtifactType.BASE
ASSOCIATED_TYPES = ()
def __init__(
self, uri: str, artifact_store: Optional[BaseArtifactStore] = None
):
"""Initializes a materializer with the given URI.
Args:
uri: The URI where the artifact data will be stored.
artifact_store: The artifact store used to store this artifact.
"""
self.uri = uri
self._artifact_store = artifact_store
def load(self, data_type: Type[Any]) -> Any:
"""Write logic here to load the data of an artifact.
Args:
data_type: The type of data that the artifact should be loaded as.
Returns:
The data of the artifact.
"""
# read from a location inside self.uri
# Example:
# data_path = os.path.join(self.uri, "abc.json")
# with self.artifact_store.open(data_path, "r") as fid:
# return json.load(fid)
...
def save(self, data: Any) -> None:
"""Write logic here to save the data of an artifact.
Args:
data: The data of the artifact to save.
"""
# write `data` into self.uri
# Example:
# data_path = os.path.join(self.uri, "abc.json")
# with self.artifact_store.open(data_path, "w") as fid:
#     json.dump(data, fid)
...
def save_visualizations(self, data: Any) -> Dict[str, VisualizationType]:
"""Save visualizations of the given data.
Args:
data: The data of the artifact to visualize.
Returns:
A dictionary of visualization URIs and their types.
"""
# Optionally, define some visualizations for your artifact
# E.g.:
# visualization_uri = os.path.join(self.uri, "visualization.html")
# with self.artifact_store.open(visualization_uri, "w") as f:
# f.write("<html><body>data</body></html>")
# visualization_uri_2 = os.path.join(self.uri, "visualization.png")
# data.save_as_png(visualization_uri_2)
# return {
# visualization_uri: ArtifactVisualizationType.HTML,
# visualization_uri_2: ArtifactVisualizationType.IMAGE
# }
...
def extract_metadata(self, data: Any) -> Dict[str, "MetadataType"]:
"""Extract metadata from the given data. | how-to | https://docs.zenml.io/how-to/handle-data-artifacts/handle-custom-data-types | 484 |
Custom secret stores
Learning how to develop a custom secret store.
The secrets store acts as the one-stop shop for all the secrets to which your pipeline or stack components might need access. It is responsible for storing, updating, and deleting only the secret values of ZenML secrets, while the ZenML secret metadata is stored in the SQL database. The secrets store interface implemented by all available secrets store back-ends is defined in the zenml.zen_stores.secrets_stores.secrets_store_interface core module and looks more or less like this:
class SecretsStoreInterface(ABC):
"""ZenML secrets store interface.
All ZenML secrets stores must implement the methods in this interface.
"""
# ---------------------------------
# Initialization and configuration
# ---------------------------------
@abstractmethod
def _initialize(self) -> None:
"""Initialize the secrets store.
This method is called immediately after the secrets store is created.
It should be used to set up the backend (database, connection etc.).
"""
# ---------
# Secrets
# ---------
@abstractmethod
def store_secret_values(
self,
secret_id: UUID,
secret_values: Dict[str, str],
) -> None:
"""Store secret values for a new secret.
Args:
secret_id: ID of the secret.
secret_values: Values for the secret.
"""
@abstractmethod
def get_secret_values(self, secret_id: UUID) -> Dict[str, str]:
"""Get the secret values for an existing secret.
Args:
secret_id: ID of the secret.
Returns:
The secret values.
Raises:
KeyError: if no secret values for the given ID are stored in the
secrets store.
"""
@abstractmethod
def update_secret_values(
self,
secret_id: UUID,
secret_values: Dict[str, str],
) -> None:
"""Updates secret values for an existing secret.
Args:
secret_id: The ID of the secret to be updated.
secret_values: The new secret values.
Raises:
KeyError: if no secret values for the given ID are stored in the
secrets store.
"""
@abstractmethod | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/manage-the-deployed-services/custom-secret-stores | 409 |
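def delete_secret_values(self, secret_id: UUID) -> None:
"""Delete secret values for an existing secret.
NOTE: assumed continuation of the truncated interface excerpt,
mirroring the other CRUD methods above.
Args:
secret_id: The ID of the secret.
Raises:
KeyError: if no secret values for the given ID are stored in the
secrets store.
"""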
rue
# The username and password for the database.
database_username: user
database_password:
# The URL of the database to use for the ZenML server.
database_url:
# The path to the SSL CA certificate to use for the database connection.
database_ssl_ca:
# The path to the client SSL certificate to use for the database connection.
database_ssl_cert:
# The path to the client SSL key to use for the database connection.
database_ssl_key:
# Whether to verify the database server SSL certificate.
database_ssl_verify_server_cert: true
# The log level to set the terraform client. Choose one of TRACE,
# DEBUG, INFO, WARN, or ERROR (case insensitive).
log_level: ERROR
Feel free to include in your file only those variables that you want to customize. For all other variables, the default values (shown above) will be used.
Cloud-specific settings
# The AWS region to deploy to.
region: eu-west-1
# The name of the RDS instance to create
rds_name: zenmlserver
# Name of RDS database to create.
db_name: zenmlserver
# Type of RDS database to create.
db_type: mysql
# Version of RDS database to create.
db_version: 5.7.38
# Instance class of RDS database to create.
db_instance_class: db.t3.micro
# Allocated storage of RDS database to create.
db_allocated_storage: 5
The database_username and database_password from the general config is used to set those variables for the AWS RDS instance.
# The project in GCP to deploy the server in.
project_id:
# The GCP region to deploy to.
region: europe-west3
# The name of the CloudSQL instance to create.
cloudsql_name: zenmlserver
# Name of CloudSQL database to create.
db_name: zenmlserver
# Instance class of CloudSQL database to create.
db_instance_tier: db-n1-standard-1
# Allocated storage of CloudSQL database, in GB, to create.
db_disk_size: 10
# Whether or not to enable the Secrets Manager API. Disable this if you
# don't have ListServices permissions on the project.
enable_secrets_manager_api: true
The project_id is required to be set. | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-zenml-cli | 478 |
Model Deployers
Deploying your models and serve real-time predictions.
Model Deployment is the process of making a machine learning model available to make predictions and decisions on real-world data. Getting predictions from trained models can be done in different ways depending on the use case: a batch prediction is used to generate predictions for a large amount of data at once, while a real-time prediction is used to generate predictions for a single data point at a time.
Model deployers are stack components responsible for serving models on a real-time or batch basis.
Online serving is the process of hosting and loading machine-learning models as part of a managed web service and providing access to the models through an API endpoint like HTTP or GRPC. Once deployed, model inference can be triggered at any time, and you can send inference requests to the model through the web service's API and receive fast, low-latency responses.
Batch inference or offline inference is the process of making a machine learning model make predictions on a batch of observations. This is useful for generating predictions for a large amount of data at once. The predictions are usually stored as files or in a database for end users or business applications.
When to use it?
The model deployers are optional components in the ZenML stack. They are used to deploy machine learning models to a target environment, either a development (local) or a production (Kubernetes or cloud) environment. The model deployers are mainly used to deploy models for real-time inference use cases. With the model deployers and other stack components, you can build pipelines that are continuously trained and deployed to production.
How model deployers slot into the stack
Here is an architecture diagram that shows how model deployers fit into the overall story of a remote stack.
Model Deployers Flavors | stack-components | https://docs.zenml.io/stack-components/model-deployers | 363 |
at the root URL path.
Secret Store configuration
Unless explicitly disabled or configured otherwise, the ZenML server will use the SQL database as a secrets store backend where secret values are stored. If you want to use an external secrets management service like the AWS Secrets Manager, GCP Secrets Manager, Azure Key Vault, HashiCorp Vault or even your custom Secrets Store back-end implementation instead, you need to configure it in the Helm values. Depending on where you deploy your ZenML server and how your Kubernetes cluster is configured, you will also need to provide the credentials needed to access the secrets management service API.
Important: If you are updating the configuration of your ZenML Server deployment to use a different secrets store back-end or location, you should follow the documented secrets migration strategy to minimize downtime and to ensure that existing secrets are also properly migrated.
Using the SQL database as a secrets store backend (default)
The SQL database is used as the default location where the ZenML secrets store keeps the secret values. You only need to configure these options if you want to change the default behavior.
It is particularly recommended to enable encryption at rest for the SQL database if you plan on using it as a secrets store backend. You'll have to configure the secret key used to encrypt the secret values. If not set, encryption will not be used and passwords will be stored unencrypted in the database. This value should be set to a random string with a recommended length of at least 32 characters, e.g.:
generate a random string with Python:
from secrets import token_hex
token_hex(32)
or with OpenSSL:
openssl rand -hex 32
then configure it in the Helm values:
zenml:
# ...
# Secrets store settings. This is used to store centralized secrets.
secretsStore:
# The type of the secrets store
type: sql
# Configuration for the SQL secrets store
sql:
encryptionKey: 0f00e4282a3181be32c108819e8a860a429b613e470ad58531f0730afff64545 | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-helm | 418 |
y custom tool? How can I extend or build on ZenML?
This depends on the tool and its respective MLOps category. We have a full guide on this over here!
How can I contribute?
We develop ZenML together with our community! To get involved, the best way to get started is to select any issue from the good-first-issue label. If you would like to contribute, please review our Contributing Guide for all relevant details.
How can I speak with the community?
The first port of call should be our Slack group. Ask your questions about bugs or specific use cases, and someone from the core team will respond.
Which license does ZenML use?
ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE.md in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.
MLflow
Deploying your models locally with MLflow.
The MLflow Model Deployer is one of the available flavors of the Model Deployer stack component. Provided with the MLflow integration it can be used to deploy and manage MLflow models on a local running MLflow server.
The MLflow Model Deployer is not yet available for use in production. This is a work in progress and will be available soon. At the moment it is only available for use in a local development environment.
When to use it?
MLflow is a popular open-source platform for machine learning. It's a great tool for managing the entire machine learning lifecycle. One of the most important features of MLflow is the ability to package your model and its dependencies into a single artifact that can be deployed to a variety of deployment targets.
You should use the MLflow Model Deployer:
if you want to have an easy way to deploy your models locally and perform real-time predictions using the running MLflow prediction server.
if you are looking to deploy your models in a simple way without the need for a dedicated deployment environment like Kubernetes or advanced infrastructure configuration.
If you are looking to deploy your models in a more complex way, you should use one of the other Model Deployer Flavors available in ZenML.
How do you deploy it?
The MLflow Model Deployer flavor is provided by the MLflow ZenML integration, so you need to install it on your local machine to be able to deploy your models. You can do this by running the following command:
zenml integration install mlflow -y
To register the MLflow model deployer with ZenML you need to run the following command:
zenml model-deployer register mlflow_deployer --flavor=mlflow
The ZenML integration will provision a local MLflow deployment server as a daemon process that will continue to run in the background to serve the latest MLflow model.
How do you use it?
Deploy a logged model | stack-components | https://docs.zenml.io/v/docs/stack-components/model-deployers/mlflow | 398 |
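The section is truncated here; as a minimal sketch, deploying a model that a previous step logged to MLflow typically goes through the deployer step shipped with the integration (trainer below is a placeholder for your own step that logs a model to MLflow):

```python
from zenml import pipeline
from zenml.integrations.mlflow.steps import mlflow_model_deployer_step

@pipeline
def deployment_pipeline():
    # `trainer` is a placeholder for a step that trains a model and logs
    # it to the MLflow experiment tracker.
    model = trainer()
    mlflow_model_deployer_step(model=model, workers=1)
```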
Kubeflow Orchestrator
Orchestrating your pipelines to run on Kubeflow.
The Kubeflow orchestrator is an orchestrator flavor provided by the ZenML kubeflow integration that uses Kubeflow Pipelines to run your pipelines.
This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!
When to use it
You should use the Kubeflow orchestrator if:
you're looking for a proven production-grade orchestrator.
you're looking for a UI in which you can track your pipeline runs.
you're already using Kubernetes or are not afraid of setting up and maintaining a Kubernetes cluster.
you're willing to deploy and maintain Kubeflow Pipelines on your cluster.
How to deploy it
The Kubeflow orchestrator supports two different modes: Local and remote. In case you want to run the orchestrator on a local Kubernetes cluster running on your machine, there is no additional infrastructure setup necessary.
If you want to run your pipelines on a remote cluster instead, you'll need to set up a Kubernetes cluster and deploy Kubeflow Pipelines:
Have an existing AWS EKS cluster set up.
Make sure you have the AWS CLI set up.
Download and install kubectl and configure it to talk to your EKS cluster using the following command:
aws eks --region REGION update-kubeconfig --name CLUSTER_NAME
Install Kubeflow Pipelines onto your cluster.
(optional) set up an AWS Service Connector to grant ZenML Stack Components easy and secure access to the remote EKS cluster.
Have an existing GCP GKE cluster set up.
Make sure you have the Google Cloud CLI set up first.
Download and install kubectl and configure it to talk to your GKE cluster using the following command:
gcloud container clusters get-credentials CLUSTER_NAME
Install Kubeflow Pipelines onto your cluster.
(optional) set up a GCP Service Connector to grant ZenML Stack Components easy and secure access to the remote GKE cluster.
get some immediate feedback logged to the console.
This functionality can then be packaged up into a ZenML step once we're happy it does what we need:
import logging
from typing_extensions import Annotated
from zenml import step

@step
def retrieval_evaluation_small() -> Annotated[float, "small_failure_rate_retrieval"]:
failure_rate = test_retrieved_docs_retrieve_best_url(question_doc_pairs)
logging.info(f"Retrieval failure rate: {failure_rate}%")
return failure_rate
We got a 20% failure rate on the first run of this test, which was a good sign that the retrieval component could be improved. We only had 5 test cases, so this was just a starting point. In reality, you'd want to keep adding more test cases to cover a wider range of scenarios. You'll discover these failure cases as you use the system more and more, so it's a good idea to keep a record of them and add them to your test suite.
You'd also want to examine the logs to see exactly which query failed. In our case, checking the logs in the ZenML dashboard, we find the following:
Failed for question: How do I generate embeddings as part of a RAG
pipeline when using ZenML?. Expected URL ending: user-guide/llmops-guide/
rag-with-zenml/embeddings-generation. Got: ['https://docs.zenml.io/user-guide/
llmops-guide/rag-with-zenml/data-ingestion', 'https://docs.zenml.io/user-guide/
llmops-guide/rag-with-zenml/understanding-rag', 'https://docs.zenml.io/v/docs/
user-guide/advanced-guide/data-management/handle-custom-data-types', 'https://docs.
zenml.io/user-guide/llmops-guide/rag-with-zenml', 'https://docs.zenml.io/v/docs/
user-guide/llmops-guide/rag-with-zenml']
We can then take a look at those documents to see why they were retrieved instead of the one we expected. This is a good way to iteratively improve the retrieval component.
Automated evaluation using synthetic generated queries | user-guide | https://docs.zenml.io/user-guide/llmops-guide/evaluation/retrieval | 430 |
Secret management
Registering and utilizing secrets.
What is a ZenML secret?
ZenML secrets are groupings of key-value pairs which are securely stored in the ZenML secrets store. Additionally, a secret always has a name that allows you to fetch or reference them in your pipelines and stacks.
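For illustration, a secret can be created and read back through the Python client (the names and values below are placeholders):

```python
from zenml.client import Client

client = Client()

# Create a secret holding two key-value pairs.
client.create_secret(
    name="my_api_secret",
    values={"username": "admin", "token": "abc123"},
)

# Fetch it back by name and read one of the values.
secret = client.get_secret("my_api_secret")
print(secret.secret_values["username"])
```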
Centralized secrets store
ZenML provides a centralized secrets management system that allows you to register and manage secrets in a secure way. The metadata of the ZenML secrets (e.g. name, ID, owner, scope etc.) is always stored in the ZenML server database, while the actual secret values are stored and managed separately, through the ZenML Secrets Store. This allows for a flexible deployment strategy that meets the security and compliance requirements of your organization.
In a local ZenML deployment, secret values are also stored in the local SQLite database. When connected to a remote ZenML server, the secret values are stored in the secrets management back-end that the server's Secrets Store is configured to use, while all access to the secrets is done through the ZenML server API.
Currently, the ZenML server can be configured to use one of the following supported secrets store back-ends:
the same SQL database that the ZenML server is using to store secrets metadata as well as other managed objects such as pipelines, stacks, etc. This is the default option.
the AWS Secrets Manager
the GCP Secret Manager
the Azure Key Vault
the HashiCorp Vault
a custom secrets store back-end implementation is also supported
Configuration and deployment
Configuring the specific secrets store back-end that the ZenML server uses is done at deployment time. This involves deciding on one of the supported back-ends and authentication mechanisms and configuring the ZenML server with the necessary credentials to authenticate with the back-end. | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/manage-the-deployed-services/secret-management | 360 |
l version is associated with a model registration.
ModelVersionStage: A model version stage is a state that a model version can be in. It can be one of the following: None, Staging, Production, Archived. The model version stage is used to track the lifecycle of a model version. For example, a model version can be in the Staging stage while it is being tested and then moved to the Production stage once it is ready for deployment.
When to use it
ZenML provides a built-in mechanism for storing and versioning pipeline artifacts through its mandatory Artifact Store. While this is a powerful way to manage artifacts programmatically, it can be challenging to use without a visual interface.
Model registries, on the other hand, offer a visual way to manage and track model metadata, particularly when using a remote orchestrator. They make it easy to retrieve and load models from storage, thanks to built-in integrations. A model registry is an excellent choice for interacting with all the models in your pipeline and managing their state in a centralized way.
Using a model registry in your stack is particularly useful if you want to interact with all the logged models in your pipeline, or if you need to manage the state of your models in a centralized way and make it easy to retrieve, load, and deploy these models.
How model registries fit into the ZenML stack
Here is an architecture diagram that shows how a model registry fits into the overall story of a remote stack.
Model Registry Flavors
Model Registries are optional stack components provided by integrations:
| Model Registry | Flavor | Integration | Notes |
|---|---|---|---|
| MLflow | mlflow | mlflow | Add MLflow as Model Registry to your stack |
| Custom Implementation | custom | custom | |
If you would like to see the available flavors of Model Registry, you can use the command:
zenml model-registry flavor list
How to use it | stack-components | https://docs.zenml.io/stack-components/model-registries | 370 |
Configure Python environments
Navigating multiple development environments.
ZenML deployments often involve multiple environments. This guide helps you manage dependencies and configurations across these environments.
Here is a visual overview of the different environments:
Client Environment (or the Runner environment)
The client environment (sometimes known as the runner environment) is where the ZenML pipelines are compiled, i.e., where you call the pipeline function (typically in a run.py script). There are different types of client environments:
A local development environment
A CI runner in production.
A ZenML Pro runner.
A runner image orchestrated by the ZenML server to start pipelines.
In all the environments, you should use your preferred package manager (e.g., pip or poetry) to manage dependencies. Ensure you install the ZenML package and any required integrations.
The client environment typically follows these key steps when starting a pipeline:
Compiling an intermediate pipeline representation via the @pipeline function.
Creating or triggering pipeline and step build environments if running remotely.
Triggering a run in the orchestrator.
Please note that the @pipeline function in your code is only ever called in this environment. Therefore, any computational logic that is executed in the pipeline function runs at this so-called compile time, rather than at execution time, which happens later.
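To make the distinction concrete, here is a minimal sketch:

```python
from zenml import pipeline, step

@step
def my_step() -> None:
    # Runs at execution time, inside the (possibly remote) step environment.
    print("executing step")

@pipeline
def my_pipeline():
    # Runs at compile time in the client environment, whenever the pipeline
    # is compiled - not when the steps actually execute.
    print("compiling pipeline")
    my_step()
```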
ZenML Server Environment
The ZenML server environment is a FastAPI application managing pipelines and metadata. It includes the ZenML Dashboard and is accessed when you deploy ZenML. To manage dependencies, install them during ZenML deployment, but only if you have custom integrations, as most are built-in.
See also here for more on configuring the server environment.
Execution Environments | how-to | https://docs.zenml.io/how-to/configure-python-environments | 335 |
Deploy with Helm
Deploying ZenML in a Kubernetes cluster with Helm.
If you wish to manually deploy and manage ZenML in a Kubernetes cluster of your choice, ZenML also includes a Helm chart among its available deployment options.
You can find the chart on this ArtifactHub repository, along with the templates, default values and instructions on how to install it. Read on to find detailed explanations on prerequisites, configuration, and deployment scenarios.
Prerequisites
You'll need the following:
A Kubernetes cluster
Optional, but recommended: a MySQL-compatible database reachable from the Kubernetes cluster (e.g. one of the managed databases offered by Google Cloud, AWS, or Azure). A MySQL server version of 8.0 or higher is required
the Kubernetes client already installed on your machine and configured to access your cluster
Helm installed on your machine
Optional: an external Secrets Manager service (e.g. one of the managed secrets management services offered by Google Cloud, AWS, Azure, or HashiCorp Vault). By default, ZenML stores secrets inside the SQL database that it's connected to, but you also have the option of using an external cloud Secrets Manager service if you already happen to use one of those cloud or service providers
ZenML Helm Configuration
You can start by taking a look at the values.yaml file and familiarize yourself with some of the configuration settings that you can customize for your ZenML deployment.
In addition to tools and infrastructure, you will also need to collect and prepare information related to your database and information related to your external secrets management service to be used for the Helm chart configuration and you may also want to install additional optional services in your cluster.
When you are ready, you can proceed to the installation section.
Collect information from your SQL database service | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-helm | 355 |
Project templates
Rocketstart your ZenML journey!
What would you need to get a quick understanding of the ZenML framework and start building your ML pipelines? The answer is one of ZenML project templates to cover major use cases of ZenML: a collection of steps and pipelines and, to top it all off, a simple but useful CLI. This is exactly what the ZenML templates are all about!
List of available project templates
| Project Template [Short name] | Tags | Description |
|---|---|---|
| Starter template [starter] | basic, scikit-learn | All the basic ML ingredients you need to get you started with ZenML: parameterized steps, a model training pipeline, a flexible configuration and a simple CLI. All created around a representative and versatile model training use-case implemented with the scikit-learn library. |
| E2E Training with Batch Predictions [e2e_batch] | etl, hp-tuning, model-promotion, drift-detection, batch-prediction, scikit-learn | This project template is a good starting point for anyone starting with ZenML. It consists of two pipelines with the following high-level steps: load, split, and preprocess data; run HP tuning; train and evaluate model performance; promote model to production; detect data drift; run batch inference. |
| NLP Training Pipeline [nlp] | nlp, hp-tuning, model-promotion, training, pytorch, gradio, huggingface | This project template is a simple NLP training pipeline that walks through tokenization, training, HP tuning, evaluation and deployment for a BERT or GPT-2 based model, and tests it locally with Gradio. |
Do you have a personal project powered by ZenML that you would like to see here? At ZenML, we are looking for design partnerships and collaboration to help us better understand the real-world scenarios in which MLOps is being used and to build the best possible experience for our users. If you are interested in sharing all or parts of your project with us in the form of a ZenML project template, please join our Slack and leave us a message! | how-to | https://docs.zenml.io/how-to/setting-up-a-project-repository/using-project-templates | 407 |
zenml integration install great_expectations -y

Depending on how you configure the Great Expectations Data Validator, it can reduce or even completely eliminate the complexity associated with setting up the store backends for Great Expectations. If you're only looking for a quick and easy way of adding Great Expectations to your stack and are not concerned with the configuration details, you can simply run:
# Register the Great Expectations data validator
zenml data-validator register ge_data_validator --flavor=great_expectations
# Register and set a stack with the new data validator
zenml stack register custom_stack -dv ge_data_validator ... --set
If you already have a Great Expectations deployment, you can configure the Great Expectations Data Validator to reuse or even replace your current configuration. You should consider the pros and cons of each deployment use case and choose the one that best fits your needs:
let ZenML initialize and manage the Great Expectations configuration. The Artifact Store will serve as a storage backend for all the information that Great Expectations needs to persist (e.g. Expectation Suites, Validation Results). However, you will not be able to set up new Data Sources, Metadata Stores, or Data Docs sites. Any changes you try to make to the configuration through code will not be persisted and will be lost when your pipeline completes or your local process exits.
use ZenML with your existing Great Expectations configuration. You can tell ZenML to replace your existing Metadata Stores with the active ZenML Artifact Store by setting the configure_zenml_stores attribute in the Data Validator. The downside is that you will only be able to run pipelines locally with this setup, given that the Great Expectations configuration is a file on your local machine. | stack-components | https://docs.zenml.io/stack-components/data-validators/great-expectations | 347 |
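Once the Data Validator is registered in your active stack, you can call Great Expectations from inside your own pipeline steps. The following is a minimal, hedged sketch using the classic Great Expectations Pandas API (ge.from_pandas); the step name and the "target" column are illustrative assumptions, not part of the integration's official API:

import great_expectations as ge
import pandas as pd

from zenml import step

@step
def validate_data(df: pd.DataFrame) -> bool:
    # Wrap the DataFrame so Great Expectations assertions become available
    dataset = ge.from_pandas(df)
    # Illustrative expectation: no missing values in the hypothetical "target" column
    result = dataset.expect_column_values_to_not_be_null("target")
    return bool(result.success)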
Fetching pipelines
Inspecting a finished pipeline run and its outputs.
Once a pipeline run has been completed, we can access the corresponding information in code, which enables the following:
Loading artifacts like models or datasets saved by previous runs
Accessing metadata or configurations of previous runs
Programmatically inspecting the lineage of pipeline runs and their artifacts
The hierarchy of pipelines, runs, steps, and artifacts involves many layers of 1-to-N relationships: each pipeline can have many runs, each run contains many steps, and each step can produce many output artifacts.
Let us investigate how to traverse this hierarchy level by level:
Pipelines
Get a pipeline via the client
After you have run a pipeline at least once, you can also fetch the pipeline via the Client.get_pipeline() method.
from zenml.client import Client
pipeline_model = Client().get_pipeline("first_pipeline")
Check out the ZenML Client Documentation for more information on the Client class and its purpose.
Discover and list all pipelines
If you're not sure which pipeline you need to fetch, you can find a list of all registered pipelines in the ZenML dashboard, or list them programmatically either via the Client or the CLI.
You can use the Client.list_pipelines() method to get a list of all pipelines registered in ZenML:
from zenml.client import Client
pipelines = Client().list_pipelines()
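You can then iterate over the returned pipeline models and inspect their properties, for example to print the registered pipeline names (a small sketch; attribute names may vary slightly across ZenML versions):

from zenml.client import Client

# List all registered pipelines and print their names as a quick sanity check
for pipeline in Client().list_pipelines():
    print(pipeline.name)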
Alternatively, you can also list pipelines with the following CLI command:
zenml pipeline list
Runs
Each pipeline can be executed many times, resulting in several Runs.
Get all runs of a pipeline
You can get a list of all runs of a pipeline using the runs property of the pipeline:
runs = pipeline_model.runs
The result will be a list of the most recent runs of this pipeline, ordered from newest to oldest.
Alternatively, you can also use the pipeline_model.get_runs() method, which allows you to specify detailed parameters for filtering or pagination. See the ZenML SDK Docs for more information.
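As a quick illustration, here is a minimal sketch that grabs the newest run and prints some of its properties (assuming the pipeline has been run at least once; attribute names may vary slightly across ZenML versions):

runs = pipeline_model.runs
latest_run = runs[0]  # runs are ordered from newest to oldest
print(latest_run.name, latest_run.status)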
Get the last run of a pipeline | how-to | https://docs.zenml.io/v/docs/how-to/build-pipelines/fetching-pipelines | 398 |
chatbot, you might need to evaluate the following:
Are the retrieved documents relevant to the query?
Is the generated answer coherent and helpful for your specific use case?
Does the generated answer contain hate speech or any sort of toxic language?
These are just examples, and the specific metrics and methods you use will depend on your use case. Generation evaluation functions as an end-to-end evaluation of the RAG pipeline, as it checks the final output of the system. It's during these end-to-end evaluations that you'll have the most leeway to use subjective metrics, since you're evaluating the system as a whole.
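To make these end-to-end checks concrete, here is a deliberately naive, self-contained sketch; the word-overlap heuristic and the banned-word list are illustrative assumptions, not part of ZenML or any evaluation library:

def is_relevant(query: str, answer: str) -> bool:
    # Naive relevance heuristic: does the answer share any words with the query?
    return bool(set(query.lower().split()) & set(answer.lower().split()))

BANNED_WORDS = {"hate", "stupid"}  # illustrative placeholder list

def is_toxic(answer: str) -> bool:
    # Naive toxicity check: flag answers containing any banned word
    return any(word in BANNED_WORDS for word in answer.lower().split())

query = "How do I register a ZenML stack?"
answer = "You can register a stack with the zenml stack register command."
print(is_relevant(query, answer), is_toxic(answer))  # True False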
Before we dive into the details, let's take a moment to look at a short high-level code example showcasing the two main areas of evaluation. Afterwards, the following sections will cover both areas in more detail and offer practical guidance on when to run these evaluations and what to look for in the results.
w_bucket=gs://my_bucket --provider=<YOUR_PROVIDER>

You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
Authentication Methods
You need to configure the following credentials for authentication to a remote MLflow tracking server:
tracking_uri: The URL pointing to the MLflow tracking server. If using an MLflow Tracking Server managed by Databricks, then the value of this attribute should be "databricks".
tracking_username: Username for authenticating with the MLflow tracking server.
tracking_password: Password for authenticating with the MLflow tracking server.
tracking_token (in place of tracking_username and tracking_password): Token for authenticating with the MLflow tracking server.
tracking_insecure_tls (optional): Set to true to skip verifying the MLflow tracking server SSL certificate.
databricks_host: The host of the Databricks workspace with the MLflow-managed server to connect to. This is only required if the tracking_uri value is set to "databricks". More information: Access the MLflow tracking server from outside Databricks
Either tracking_token or tracking_username and tracking_password must be specified.
This option configures the credentials for the MLflow tracking service directly as stack component attributes.
This is not recommended for production settings as the credentials won't be stored securely and will be clearly visible in the stack configuration.
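For context, once an experiment tracker is registered under a name such as mlflow_experiment_tracker (registration shown below) and added to your active stack, you enable it on a step and use the regular MLflow client API inside that step. A minimal sketch; the metric values are placeholders:

import mlflow

from zenml import step

@step(experiment_tracker="mlflow_experiment_tracker")
def train_model() -> float:
    # Anything logged through the mlflow client inside this step is sent
    # to the tracking server configured on the experiment tracker
    accuracy = 0.42  # placeholder metric
    mlflow.log_param("learning_rate", 0.001)
    mlflow.log_metric("accuracy", accuracy)
    return accuracy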
# Register the MLflow experiment tracker
zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \
--tracking_uri=<URI> --tracking_token=<token>
# You can also register it like this:
# zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \
# --tracking_uri=<URI> --tracking_username=<USERNAME> --tracking_password=<PASSWORD>
# Register and set a stack with the new experiment tracker | stack-components | https://docs.zenml.io/v/docs/stack-components/experiment-trackers/mlflow | 400 |