Whylogs
How to collect and visualize statistics to track changes in your pipelines' data with whylogs/WhyLabs profiling.
The whylogs/WhyLabs Data Validator flavor provided with the ZenML integration uses whylogs and WhyLabs to generate and track data profiles, highly accurate descriptive representations of your data. The profiles can be used to implement automated corrective actions in your pipelines, or to render interactive representations for further visual interpretation, evaluation and documentation.
When would you want to use it?
Whylogs is an open-source library that analyzes your data and creates statistical summaries called whylogs profiles. Whylogs profiles can be processed in your pipelines and visualized locally or uploaded to the WhyLabs platform, where more in-depth analysis can be carried out. Even though whylogs also supports other data types, the ZenML whylogs integration currently only works with tabular data in pandas.DataFrame format.
You should use the whylogs/WhyLabs Data Validator when you need the following data validation features that are possible with whylogs and WhyLabs:
Data Quality: validate data quality in model inputs or in a data pipeline
Data Drift: detect data drift in model input features
Model Drift: detect training-serving skew, concept drift, and model performance degradation
You should consider one of the other Data Validator flavors if you need a different set of data validation features.
How do you deploy it?
The whylogs Data Validator flavor is included in the whylogs ZenML integration. You need to install it on your local machine to be able to register a whylogs Data Validator and add it to your stack:
zenml integration install whylogs -y
If you don't need to connect to the WhyLabs platform to upload and store the generated whylogs data profiles, the Data Validator stack component does not require any configuration parameters. Adding it to a stack is as simple as running e.g.:
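A sketch following the standard ZenML CLI pattern (component names are arbitrary, and the `...` stands for whatever other stack components you combine it with):
zenml data-validator register whylogs_data_validator --flavor=whylogs
zenml stack register custom_stack -dv whylogs_data_validator ... --set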
📃Use configuration files
ZenML makes it easy to configure and run a pipeline with configuration files.
ZenML pipelines can be configured at runtime with a simple YAML file that can help you set parameters, control caching behavior or even configure different stack components.
Learn more about the different options in the following sections:
What can be configured
Configuration hierarchy
Autogenerate a template yaml file
$ zenml model-deployer models get-url 8cbe671b-9fce-4394-a051-68e001f92765
Prediction URL of Served Model 8cbe671b-9fce-4394-a051-68e001f92765 is:
http://abb84c444c7804aa98fc8c097896479d-377673393.us-east-1.elb.amazonaws.com/seldon/zenml-workloads/zenml-8cbe671b-9fce-4394-a051-68e001f92765/api/v0.1/predictions
$ zenml model-deployer models delete 8cbe671b-9fce-4394-a051-68e001f92765
In Python, you can alternatively discover the prediction URL of a deployed model by inspecting the metadata of the step that deployed the model:
from zenml.client import Client
pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")
deployer_step = pipeline_run.steps["<NAME_OF_MODEL_DEPLOYER_STEP>"]
deployed_model_url = deployer_step.run_metadata["deployed_model_url"].value
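As a quick illustration, you could then send a request to that URL. This is a hypothetical sketch; the exact payload format depends on the model server that is doing the serving:
import requests

response = requests.post(
    deployed_model_url,
    json={"instances": [[5.1, 3.5, 1.4, 0.2]]},  # payload shape depends on the served model
)
print(response.json())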
The ZenML integrations that provide Model Deployer stack components also include standard pipeline steps that can directly be inserted into any pipeline to achieve a continuous model deployment workflow. These steps take care of all the aspects of continuously deploying models to an external server and saving the Service configuration into the Artifact Store, where it can be loaded at a later time to re-create the initial conditions used to serve a particular model.
Active Directory credentials or generic OIDC tokens. This authentication method only requires a GCP workload identity external account JSON file that only contains the configuration for the external account, without any sensitive credentials. It allows implementing a two-layer authentication scheme that keeps the set of permissions associated with implicit credentials down to the bare minimum and grants permissions to the privilege-bearing GCP service account instead.
This authentication method can be used to authenticate to GCP services using credentials from other cloud providers or identity providers. When used with workloads running on AWS or Azure, it involves automatically picking up credentials from the AWS IAM or Azure AD identity associated with the workload and using them to authenticate to GCP services. This means that the result depends on the environment where the ZenML server is deployed and is thus not fully reproducible.
When used with AWS or Azure implicit in-cloud authentication, this method may constitute a security risk, because it can give users access to the identity (e.g. AWS IAM role or Azure AD principal) implicitly associated with the environment where the ZenML server is running. For this reason, all implicit authentication methods are disabled by default and need to be explicitly enabled by setting the ZENML_ENABLE_IMPLICIT_AUTH_METHODS environment variable or the helm chart enableImplicitAuthMethods configuration option to true in the ZenML deployment.
By default, the GCP connector generates temporary OAuth 2.0 tokens from the external account credentials and distributes them to clients. The tokens have a limited lifetime of 1 hour. This behavior can be disabled by setting the generate_temporary_tokens configuration option to False, in which case the connector will distribute the external account credentials JSON to clients instead (not recommended).
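For illustration, registering such a connector might look like the following sketch (flag names are assumptions and may differ between ZenML versions):
zenml service-connector register gcp-workload-identity --type gcp \
    --auth-method external-account \
    --project_id=<PROJECT_ID> \
    --external_account_json=@external-account.json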
def implementation_class(self) -> Type[BaseStepOperator]:
    """Returns the implementation class for this flavor."""
This is a slimmed-down version of the base implementation which aims to highlight the abstraction layer. In order to see the full implementation and get the complete docstrings, please check the SDK docs.
Build your own custom step operator
If you want to create your own custom flavor for a step operator, you can follow these steps (a skeletal sketch follows the list):
Create a class that inherits from the BaseStepOperator class and implement the abstract launch method. This method has two main responsibilities:
Preparing a suitable execution environment (e.g. a Docker image): The general environment is highly dependent on the concrete step operator implementation, but for ZenML to be able to run the step it requires you to install some pip dependencies. The list of requirements needed to successfully execute the step can be found via the Docker settings info.pipeline.docker_settings passed to the launch() method. Additionally, you'll have to make sure that all the source code of your ZenML step and pipeline is available within this execution environment.
Running the entrypoint command: Actually running a single step of a pipeline requires knowledge of many ZenML internals and is implemented in the zenml.step_operators.step_operator_entrypoint_configuration module. As long as your environment was set up correctly (see the previous bullet point), you can run the step using the command provided via the entrypoint_command argument of the launch() method.
If your step operator allows the specification of per-step resources, make sure to handle the resources defined on the step (info.config.resource_settings) that was passed to the launch() method.
If you need to provide any configuration, create a class that inherits from the BaseStepOperatorConfig class and add your configuration parameters.
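Putting these pieces together, a skeletal sketch might look like this (the launch() signature varies slightly across ZenML versions, and instance_type is a hypothetical configuration parameter):
from typing import Dict, List, Type

from zenml.config.step_run_info import StepRunInfo
from zenml.step_operators import (
    BaseStepOperator,
    BaseStepOperatorConfig,
    BaseStepOperatorFlavor,
)

class MyStepOperatorConfig(BaseStepOperatorConfig):
    instance_type: str = "m5.large"  # hypothetical setting

class MyStepOperator(BaseStepOperator):
    def launch(
        self,
        info: StepRunInfo,
        entrypoint_command: List[str],
        environment: Dict[str, str],
    ) -> None:
        # 1. Prepare an execution environment that installs the requirements
        #    from info.pipeline.docker_settings and contains your source code.
        # 2. Run `entrypoint_command` inside that environment.
        ...

class MyStepOperatorFlavor(BaseStepOperatorFlavor):
    @property
    def name(self) -> str:
        return "my_step_operator"

    @property
    def config_class(self) -> Type[MyStepOperatorConfig]:
        return MyStepOperatorConfig

    @property
    def implementation_class(self) -> Type[MyStepOperator]:
        return MyStepOperator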
Slack Alerter
Sending automated alerts to a Slack channel.
The SlackAlerter enables you to send messages to a dedicated Slack channel directly from within your ZenML pipelines.
The slack integration contains the following two standard steps:
slack_alerter_post_step takes a string message or a custom Slack block, posts it to a Slack channel, and returns whether the operation was successful.
slack_alerter_ask_step also posts a message or a custom Slack block to a Slack channel, but waits for user feedback, and only returns True if a user explicitly approved the operation from within Slack (e.g., by sending "approve" / "reject" to the bot in response).
Interacting with Slack from within your pipelines can be very useful in practice:
The slack_alerter_post_step allows you to get notified immediately when failures happen (e.g., model performance degradation, data drift, ...),
The slack_alerter_ask_step allows you to integrate a human-in-the-loop into your pipelines before executing critical steps, such as deploying new models.
How to use it
Requirements
Before you can use the SlackAlerter, you first need to install ZenML's slack integration:
zenml integration install slack -y
See the Integrations page for more details on ZenML integrations and how to install and use them.
Setting Up a Slack Bot
In order to use the SlackAlerter, you first need to have a Slack workspace set up with a channel that you want your pipelines to post to.
Then, you need to create a Slack App with a bot in your workspace.
Make sure to give your Slack bot chat:write and chat:write.public permissions in the OAuth & Permissions tab under Scopes.
Registering a Slack Alerter in ZenML
Next, you need to register a slack alerter in ZenML and link it to the bot you just created. You can do this with the following command:
zenml alerter register slack_alerter \
--flavor=slack \
--slack_token=<SLACK_TOKEN> \
--default_slack_channel_id=<SLACK_CHANNEL_ID>
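After registering the alerter, you would typically add it to your active stack and can then use it from within a step, e.g. (a sketch; -al is the usual shorthand for --alerter in the ZenML CLI):
zenml stack update <STACK_NAME> -al slack_alerter
from zenml import step
from zenml.client import Client

@step
def notify_on_success() -> None:
    # posts a plain message to the configured default Slack channel
    Client().active_stack.alerter.post("Pipeline finished successfully!")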
zenml container-registry connect cloud_container_registry --connector cloud_connector
With the components registered, everything is set up for the next steps.
For more information, you can always check the dedicated Skypilot orchestrator guide.
In order to launch a pipeline on Azure with the SkyPilot orchestrator, the first thing that you need to do is to install the Azure and SkyPilot integrations:
zenml integration install azure skypilot_azure -y
Before we start registering any components, there is another step that we have to execute. As we explained in the previous section, components such as orchestrators and container registries often require you to set up the right permissions. In ZenML, this process is simplified with the use of Service Connectors. For this example, we will need to use the Service Principal authentication feature of our Azure service connector:
zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id=<TENANT_ID> --client_id=<CLIENT_ID> --client_secret=<CLIENT_SECRET>
Once the service connector is set up, we can register a Skypilot orchestrator:
zenml orchestrator register skypilot_orchestrator -f vm_azure
zenml orchestrator connect skypilot_orchestrator --connector cloud_connector
The next step is to register an Azure container registry. Similar to the orchestrator, we will use our connector as we are setting up the container registry.
zenml container-registry register cloud_container_registry -f azure --uri=<REGISTRY_NAME>.azurecr.io
zenml container-registry connect cloud_container_registry --connector cloud_connector
With the components registered, everything is set up for the next steps.
For more information, you can always check the dedicated Skypilot orchestrator guide.
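To combine these components into a runnable stack, you would then typically register and activate a stack (a sketch, assuming an artifact store named cloud_artifact_store is already registered):
zenml stack register minimal_cloud_stack -o skypilot_orchestrator -c cloud_container_registry -a cloud_artifact_store --set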
Having trouble with setting up infrastructure? Try reading the stack deployment section of the docs to gain more insight. If that still doesn't work, join the ZenML community and ask!
Running a pipeline on a cloud stack
as it provides up to 4 vCPUs and 16 GB of memory.
Docker image for the Spark drivers and executors
When you want to run your steps on a Kubernetes cluster, Spark will require you to choose a base image for the driver and executor pods. Normally, for this purpose, you can either use one of the base images in Spark's Docker Hub or create an image using the docker-image-tool, which will build an image from your own Spark installation.
When using Spark on EKS, you need to take the latter approach and use the docker-image-tool. Before the build process, you also need to download the following packages:
hadoop-aws = 3.3.1
aws-java-sdk-bundle = 1.12.150
and put them in the jars folder within your Spark installation. Once that is set up, you can build the image as follows:
cd $SPARK_HOME  # If this is empty, set the SPARK_HOME variable to point to your Spark installation
SPARK_IMAGE_TAG=<SPARK_IMAGE_TAG>
./bin/docker-image-tool.sh -t $SPARK_IMAGE_TAG -p kubernetes/dockerfiles/spark/bindings/python/Dockerfile -u 0 build
BASE_IMAGE_NAME=spark-py:$SPARK_IMAGE_TAG
If you are working on an M1 Mac, you will need to build the image for the amd64 architecture by adding the -X flag to the previous command. For example:
./bin/docker-image-tool.sh -X -t $SPARK_IMAGE_TAG -p kubernetes/dockerfiles/spark/bindings/python/Dockerfile -u 0 build
Configuring RBAC
Additionally, you may need to create several resources in Kubernetes in order to give Spark access to edit/manage your driver and executor pods.
To do so, create a file called rbac.yaml with the following content:
apiVersion: v1
kind: Namespace
metadata:
  name: spark-namespace
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark-service-account
  namespace: spark-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spark-role
  namespace: spark-namespace
subjects:
  - kind: ServiceAccount
    name: spark-service-account
    namespace: spark-namespace
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
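Once the file is in place, you can apply these resources with kubectl:
kubectl apply -f rbac.yaml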
dashboard.
Warning! Usage in remote orchestrators
The current ZenML version has a limitation in its base Docker image that requires a workaround for all pipelines using Deepchecks with a remote orchestrator (e.g. Kubeflow, Vertex). The limitation is that the base Docker image needs to be extended to include binaries required by opencv2, a package that Deepchecks depends on.
While these binaries might be available on most operating systems out of the box (and therefore not a problem with the default local orchestrator), we need to tell ZenML to add them to the containerization step when running in remote settings. Here is how:
First, create a file called deepchecks-zenml.Dockerfile and place it on the same level as your runner script (commonly called run.py). The contents of the Dockerfile are as follows:
ARG ZENML_VERSION=0.20.0
FROM zenmldocker/zenml:${ZENML_VERSION} AS base
RUN apt-get update
RUN apt-get install ffmpeg libsm6 libxext6 -y
Then, place the following snippet above your pipeline definition. Note that the path of the Dockerfile is relative to where the pipeline definition file is. Read the containerization guide for more details:
import zenml
from zenml import pipeline
from zenml.config import DockerSettings
from pathlib import Path
import sys
docker_settings = DockerSettings(
    dockerfile="deepchecks-zenml.Dockerfile",
    build_options={
        "buildargs": {
            "ZENML_VERSION": f"{zenml.__version__}"
        },
    },
)
@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
# same code as always
...
From here on, you can continue to use the deepchecks integration as is explained below.
The Deepchecks standard steps
ZenML wraps the Deepchecks functionality for tabular data in the form of four standard steps:
DeepchecksDataIntegrityCheckStep: use it in your pipelines to run data integrity tests on a single dataset
DeepchecksDataDriftCheckStep: use it in your pipelines to run data drift tests on two datasets as input: target and reference.
🪆Use the Model Control Plane
A Model is simply an entity that groups pipelines, artifacts, metadata, and other crucial business data into a unified entity. In this sense, a ZenML Model is a concept that more broadly encapsulates your ML product's business logic. You may even think of a ZenML Model as a "project" or a "workspace".
Please note that one of the most common artifacts associated with a Model in ZenML is the so-called technical model, which is the actual model file or files that hold the weights and parameters of a machine learning training result. However, this is not the only relevant artifact; artifacts such as the training data and the predictions the model produces in production are also linked inside a ZenML Model.
Models are first-class citizens in ZenML, and as such, viewing and using them is unified and centralized in the ZenML API and client, as well as on the ZenML Cloud dashboard.
A Model captures lineage information and more. Within a Model, different Model versions can be staged. For example, you can rely on your predictions at a specific stage, like Production, and decide whether the Model version should be promoted based on your business rules during training. Plus, accessing data from other Models and their versions is just as simple.
The Model Control Plane is how you manage your models through this unified interface. It allows you to combine the logic of your pipelines, artifacts and crucial business data along with the actual 'technical model'.
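For example, attaching a Model to a pipeline might look like the following sketch (in older ZenML versions the class is named ModelVersion rather than Model):
from zenml import Model, pipeline

model = Model(
    name="my_classifier",
    description="Groups all training pipelines and artifacts for this product",
)

@pipeline(model=model)
def training_pipeline():
    ...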
To see an end-to-end example, please refer to the starter guide.
# `environment` is taken from the configuration file,
# and it is evaluated to `production`
@pipeline
def my_pipeline(environment: str):
    ...

if __name__ == "__main__":
    my_pipeline.with_options(config_path="config.yaml")()
There might be conflicting settings for step or pipeline inputs when working with YAML configuration files. Such situations happen when you define a step or a pipeline parameter in the configuration file and later override it from the code. Don't worry: when this happens, you will be informed with details and instructions on how to fix it. Here is an example of such a conflict:
# config.yaml
parameters:
  some_param: 24

steps:
  my_step:
    parameters:
      input_2: 42
# run.py
from zenml import step, pipeline

@step
def my_step(input_1: int, input_2: int) -> None:
    pass

@pipeline
def my_pipeline(some_param: int):
    # here an error will be raised since `input_2` is
    # `42` in config, but `43` was provided in the code
    my_step(input_1=42, input_2=43)

if __name__ == "__main__":
    # here an error will be raised since `some_param` is
    # `24` in config, but `23` was provided in the code
    my_pipeline(23)
Parameters and caching
When an input is passed as a parameter, the step will only be cached if all parameter values are exactly the same as for previous executions of the step.
Artifacts and caching
When an artifact is used as a step function input, the step will only be cached if all the artifacts are exactly the same as for previous executions of the step. This means that if any of the upstream steps that produce the input artifacts for a step were not cached, the step itself will always be executed.
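A minimal sketch of the parameter-caching behavior:
from zenml import pipeline, step

@step
def my_step(param: int) -> int:
    return param * 2

@pipeline
def my_pipeline(param: int):
    my_step(param)

my_pipeline(42)  # first run: the step executes
my_pipeline(42)  # same parameter value: the step run can be cached
my_pipeline(43)  # changed parameter value: the step executes again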
See Also:
Use configuration files to set parameters
How caching works and how to control it
Returns:
    The Docker image repo digest or name.
"""
This is a slimmed-down version of the base implementation which aims to highlight the abstraction layer. In order to see the full implementation and get the complete docstrings, please check the source code on GitHub.
Build your own custom image builder
If you want to create your own custom flavor for an image builder, you can follow these steps:
Create a class that inherits from the BaseImageBuilder class and implement the abstract build method. This method should use the given build context and build a Docker image with it. If additionally a container registry is passed to the build method, the image builder is also responsible for pushing the image there.
If you need to provide any configuration, create a class that inherits from the BaseImageBuilderConfig class and adds your configuration parameters.
Bring both the implementation and the configuration together by inheriting from the BaseImageBuilderFlavor class. Make sure that you give a name to the flavor through its abstract property.
Once you are done with the implementation, you can register it through the CLI. Please ensure you point to the flavor class via dot notation:
zenml image-builder flavor register <path.to.MyImageBuilderFlavor>
For example, if your flavor class MyImageBuilderFlavor is defined in flavors/my_flavor.py, you'd register it by doing:
zenml image-builder flavor register flavors.my_flavor.MyImageBuilderFlavor
ZenML resolves the flavor class by taking the path where you initialized zenml (via zenml init) as the starting point of resolution. Therefore, please ensure you follow the best practice of initializing zenml at the root of your repository.
If ZenML does not find an initialized ZenML repository in any parent directory, it will default to the current working directory, but usually it's better to not have to rely on this mechanism, and initialize zenml at the root.
Afterward, you should see the new flavor in the list of available flavors:
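zenml image-builder flavor list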
AWS Secrets Manager accounts or regions may be used. Always make sure that the backup Secrets Store is configured to use a different location than the primary Secrets Store. The location can be different in terms of the Secrets Store back-end type (e.g. internal database vs. AWS Secrets Manager) or the actual location of the Secrets Store back-end (e.g. a different AWS Secrets Manager account or region, GCP Secret Manager project or Azure Key Vault's vault).
Using the same location for both the primary and backup Secrets Store will not provide any additional benefits and may even result in unexpected behavior.
When a backup secrets store is in use, the ZenML Server will always attempt to read and write secret values from/to the primary Secrets Store first while ensuring to keep the backup Secrets Store in sync. If the primary Secrets Store is unreachable, if the secret values are not found there or any otherwise unexpected error occurs, the ZenML Server falls back to reading and writing from/to the backup Secrets Store. Only if the backup Secrets Store is also unavailable, the ZenML Server will return an error.
In addition to the hidden backup operations, users can also explicitly trigger a backup operation by using the zenml secret backup CLI command. This command will attempt to read all secrets from the primary Secrets Store and write them to the backup Secrets Store. Similarly, the zenml secret restore CLI command can be used to restore secrets from the backup Secrets Store to the primary Secrets Store. These CLI commands are useful for migrating secrets from one Secrets Store to another.
Secrets migration strategy
Sometimes you may need to change the external provider or location where secrets values are stored by the Secrets Store. The immediate implication of this is that the ZenML server will no longer be able to access existing secrets with the new configuration until they are also manually copied to the new location. Some examples of such changes include:
object that can be used to access any AWS service. They support multiple authentication methods. Some of these allow clients direct access to long-lived, broad-access credentials and are only recommended for local development use. Others support distributing temporary API tokens automatically generated from long-lived credentials, which are safer for production use-cases, but may be more difficult to set up. A few authentication methods even support down-scoping the permissions of temporary API tokens so that they only allow access to the target resource and restrict access to everything else. This is covered at length in the section on best practices for authentication methods.
There is flexibility regarding the range of resources that a single cloud provider Service Connector instance configured with a single set of credentials can be scoped to access:
a multi-type Service Connector instance can access any type of resource from the range of supported Resource Types
a multi-instance Service Connector instance can access multiple resources of the same type
a single-instance Service Connector instance is scoped to access a single resource
The following output shows three different Service Connectors configured from the same GCP Service Connector Type using three different scopes but with the same credentials:
a multi-type GCP Service Connector that allows access to every possible resource accessible with the configured credentials
a multi-instance GCS Service Connector that allows access to multiple GCS buckets
a single-instance GCS Service Connector that only permits access to one GCS bucket
$ zenml service-connector list
┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓
┃ ACTIVE │ NAME │ ID │ TYPE │ RESOURCE TYPES │ RESOURCE NAME │ SHARED │ OWNER │ EXPIRES IN │ LABELS ┃
You can generate a suitable encryption key value using, e.g.:
from secrets import token_hex
token_hex(32)
or:
openssl rand -hex 32
Important: If you configure encryption for your SQL database secrets store, you should keep the ZENML_SECRETS_STORE_ENCRYPTION_KEY value somewhere safe and secure, as it will always be required by the ZenML server to decrypt the secrets in the database. If you lose the encryption key, you will not be able to decrypt the secrets in the database and will have to reset them.
These configuration options are only relevant if you're using the AWS Secrets Manager as the secrets store backend.
ZENML_SECRETS_STORE_TYPE: Set this to aws in order to set this type of secret store.
The AWS Secrets Store uses the ZenML AWS Service Connector under the hood to authenticate with the AWS Secrets Manager API. This means that you can use any of the authentication methods supported by the AWS Service Connector to authenticate with the AWS Secrets Manager API.
"Version": "2012-10-17",
"Statement": [
"Sid": "ZenMLSecretsStore",
"Effect": "Allow",
"Action": [
"secretsmanager:CreateSecret",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:PutSecretValue",
"secretsmanager:TagResource",
"secretsmanager:DeleteSecret"
],
"Resource": "arn:aws:secretsmanager:<AWS-region>:<AWS-account-id>:secret:zenml/*"
The following configuration options are supported:
ZENML_SECRETS_STORE_AUTH_METHOD: The AWS Service Connector authentication method to use (e.g. secret-key or iam-role).
ZENML_SECRETS_STORE_AUTH_CONFIG: The AWS Service Connector configuration, in JSON format (e.g. {"aws_access_key_id":"<aws-key-id>","aws_secret_access_key":"<aws-secret-key>","region":"<aws-region>"}).
Note: The remaining configuration options are deprecated and may be removed in a future release. Instead, you should set the ZENML_SECRETS_STORE_AUTH_METHOD and ZENML_SECRETS_STORE_AUTH_CONFIG variables to use the AWS Service Connector authentication method.
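For illustration, a sketch of passing these variables to a Docker-based server deployment (values are placeholders):
docker run -d -p 8080:8080 --name zenml-server \
    -e ZENML_SECRETS_STORE_TYPE=aws \
    -e ZENML_SECRETS_STORE_AUTH_METHOD=secret-key \
    -e ZENML_SECRETS_STORE_AUTH_CONFIG='{"aws_access_key_id":"<aws-key-id>","aws_secret_access_key":"<aws-secret-key>","region":"<aws-region>"}' \
    zenmldocker/zenml-server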
Run on GCP
A simple guide to quickly set up a minimal stack on GCP.
The GCP integration currently only works for Python versions <3.11. The ZenML team is aware of this dependency clash/issue and is working on a fix. For now, please use Python <3.11 together with the GCP integration.
This page aims to quickly set up a minimal production stack on GCP. With just a few simple steps you will set up a service account with specifically-scoped permissions that ZenML can use to authenticate with the relevant GCP resources.
While this guide focuses on Google Cloud, we are seeking contributors to create a similar guide for other cloud providers. If you are interested, please create a pull request over on GitHub.
1) Choose a GCP project
In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Make sure a billing account is attached to this project to allow the use of some APIs.
This is how you would do it from the CLI if this is preferred.
gcloud projects create <PROJECT_ID> --billing-project=<BILLING_PROJECT>
If you don't plan to keep the resources that you create in this procedure, create a new project. After you finish these steps, you can delete the project, thereby removing all resources associated with the project.
2) Enable GCloud APIs
The following APIs will need to be enabled within your chosen GCP project.
Cloud Functions API # For the vertex orchestrator
Cloud Run Admin API # For the vertex orchestrator
Cloud Build API # For the container registry
Artifact Registry API # For the container registry
Cloud Logging API # Generally needed
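If you prefer the CLI, these can be enabled in one go (service IDs assumed from their console names):
gcloud services enable cloudfunctions.googleapis.com run.googleapis.com cloudbuild.googleapis.com artifactregistry.googleapis.com logging.googleapis.com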
3) Create a dedicated service account
The service account should have the following roles.
AI Platform Service Agent
Storage Object Admin
These roles give permissions for full CRUD on storage objects and full permissions for compute within Vertex AI.
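From the CLI, creating the account and granting one of these roles might look like this sketch (zenml-sa is a hypothetical account name, and the role ID is assumed from its console name):
gcloud iam service-accounts create zenml-sa --project=<PROJECT_ID>
gcloud projects add-iam-policy-binding <PROJECT_ID> \
    --member="serviceAccount:zenml-sa@<PROJECT_ID>.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"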
4) Create a JSON Key for your service account
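A CLI sketch for this step, reusing the hypothetical zenml-sa account from above:
gcloud iam service-accounts keys create zenml-sa-key.json \
    --iam-account=zenml-sa@<PROJECT_ID>.iam.gserviceaccount.com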
Develop a custom experiment tracker
Learning how to develop a custom experiment tracker.
Before diving into the specifics of this component type, it is beneficial to familiarize yourself with our general guide to writing custom component flavors in ZenML. This guide provides an essential understanding of ZenML's component flavor concepts.
Base abstraction in progress!
We are actively working on the base abstraction for the Experiment Tracker, which will be available soon. As a result, their extension is not recommended at the moment. When you are selecting an Experiment Tracker for your stack, you can use one of the existing flavors.
If you need to implement your own Experiment Tracker flavor, you can still do so, but keep in mind that you may have to refactor it when the base abstraction is released.
Build your own custom experiment tracker
If you want to create your own custom flavor for an experiment tracker, you can follow these steps:
Create a class that inherits from the BaseExperimentTracker class and implements the abstract methods.
If you need any configuration, create a class that inherits from the BaseExperimentTrackerConfig class and add your configuration parameters.
Bring both the implementation and the configuration together by inheriting from the BaseExperimentTrackerFlavor class.
Once you are done with the implementation, you can register it through the CLI. Please ensure you point to the flavor class via dot notation:
zenml experiment-tracker flavor register <path.to.MyExperimentTrackerFlavor>
For example, if your flavor class MyExperimentTrackerFlavor is defined in flavors/my_flavor.py, you'd register it by doing:
zenml experiment-tracker flavor register flavors.my_flavor.MyExperimentTrackerFlavor
to authenticate with your cloud provider of choice. First, we need to install the SkyPilot integration for AWS and the AWS connectors extra, using the following two commands:
pip install "zenml[connectors-aws]"
zenml integration install aws skypilot_aws
To provision VMs on AWS, your VM Orchestrator stack component needs to be configured to authenticate with AWS via a Service Connector. To configure the AWS Service Connector, you need to register a new service connector configured with AWS credentials that have at least the minimum permissions required by SkyPilot, as documented here.
First, check that the AWS service connector type is available using the following command:
zenml service-connector list-types --type aws
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━━┓
┃ NAME │ TYPE │ RESOURCE TYPES │ AUTH METHODS │ LOCAL │ REMOTE ┃
┠───────────────────────┼────────┼───────────────────────┼──────────────────┼───────┼────────┨
┃ AWS Service Connector │ 🔶 aws │ 🔶 aws-generic │ implicit │ ✅ │ ➖ ┃
┃ │ │ 📦 s3-bucket │ secret-key │ │ ┃
┃ │ │ 🌀 kubernetes-cluster │ sts-token │ │ ┃
┃ │ │ 🐳 docker-registry │ iam-role │ │ ┃
┃ │ │ │ session-token │ │ ┃
┃ │ │ │ federation-token │ │ ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛
Next, configure a service connector using the CLI or the dashboard with the AWS credentials. For example, the following command uses the local AWS CLI credentials to auto-configure the service connector:
zenml service-connector register aws-skypilot-vm --type aws --region=us-east-1 --auto-configure
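With the connector registered, you can then register and connect a SkyPilot orchestrator, mirroring the Azure flow shown elsewhere in this guide (a sketch; the AWS flavor is vm_aws):
zenml orchestrator register skypilot_orchestrator -f vm_aws
zenml orchestrator connect skypilot_orchestrator --connector aws-skypilot-vm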
RAG in 85 lines of code
Learn how to implement a RAG pipeline in just 85 lines of code.
There's a lot of theory and context to think about when it comes to RAG, but let's start with a quick implementation in code to motivate what follows. The 85 lines below do the following:
load some data (a fictional dataset about 'ZenML World') as our corpus
process that text (split it into chunks and 'tokenize' it (i.e. split into words))
take a query as input and find the most relevant chunks of text from our corpus data
use OpenAI's GPT-3.5 model to answer the question based on the relevant chunks
import os
import re
import string
from openai import OpenAI
def preprocess_text(text):
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    text = re.sub(r"\s+", " ", text).strip()
    return text

def tokenize(text):
    return preprocess_text(text).split()

def retrieve_relevant_chunks(query, corpus, top_n=2):
    query_tokens = set(tokenize(query))
    similarities = []
    for chunk in corpus:
        chunk_tokens = set(tokenize(chunk))
        similarity = len(query_tokens.intersection(chunk_tokens)) / len(
            query_tokens.union(chunk_tokens)
        )
        similarities.append((chunk, similarity))
    similarities.sort(key=lambda x: x[1], reverse=True)
    return [chunk for chunk, _ in similarities[:top_n]]

def answer_question(query, corpus, top_n=2):
    relevant_chunks = retrieve_relevant_chunks(query, corpus, top_n)
    if not relevant_chunks:
        return "I don't have enough information to answer the question."
    context = "\n".join(relevant_chunks)
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    chat_completion = client.chat.completions.create(
        messages=[
            {
                "role": "system",
                "content": f"Based on the provided context, answer the following question: {query}\n\nContext:\n{context}",
            },
            {
                "role": "user",
                "content": query,
            },
        ],
        model="gpt-3.5-turbo",
    )
    return chat_completion.choices[0].message.content.strip()
# Sci-fi themed corpus about "ZenML World"
corpus = [
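The corpus entries themselves are cut off in this excerpt; a couple of illustrative stand-in entries and a usage call might look like this (hypothetical data, not the original dataset):
corpus = [
    "The luminescent forests of ZenML World are home to glowing Zenbots.",
    "Telepathic Treants tend the ancient knowledge groves of ZenML World.",
]

print(answer_question("What lives in the luminescent forests of ZenML World?", corpus))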
Finetuning LLMs with ZenML
Finetune LLMs for specific tasks or to improve performance and cost.
🚧 This guide is a work in progress. Please check back soon for updates.
same credentials across multiple stack components. If you don't already have a GCP Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You have the option to configure a GCP Service Connector that can be used to access more than one GCS bucket or even more than one type of GCP resource:
zenml service-connector register --type gcp -i
A non-interactive CLI example that leverages the Google Cloud CLI configuration on your local machine to auto-configure a GCP Service Connector targeting a single GCS bucket is:
zenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type gcs-bucket --resource-name <GCS_BUCKET_NAME> --auto-configure
Example Command Output
$ zenml service-connector register gcs-zenml-bucket-sl --type gcp --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl --auto-configure
⠸ Registering service connector 'gcs-zenml-bucket-sl'...
Successfully registered service connector `gcs-zenml-bucket-sl` with access to the following resources:
┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE │ RESOURCE NAMES ┃
┠───────────────┼──────────────────────┨
┃ 📦 gcs-bucket │ gs://zenml-bucket-sl ┃
┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┛
Note: Please remember to grant the entity associated with your GCP credentials permissions to read and write to your GCS bucket as well as to list accessible GCS buckets. For a full list of permissions required to use a GCP Service Connector to access one or more GCS buckets, please refer to the GCP Service Connector GCS bucket resource type documentation or read the documentation available in the interactive CLI commands and dashboard. The GCP Service Connector supports many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.
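Once the connector is registered, you would typically register a GCS Artifact Store and connect it to the connector (a sketch using the bucket from the example above):
zenml artifact-store register gcs-zenml-bucket-sl --flavor gcp --path=gs://zenml-bucket-sl
zenml artifact-store connect gcs-zenml-bucket-sl --connector gcs-zenml-bucket-sl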
verified with multiple scopes after registration. First, listing the Service Connectors will clarify which scopes they are configured with:
zenml service-connector list
Example Command Output
┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓
┃ ACTIVE │ NAME │ ID │ TYPE │ RESOURCE TYPES │ RESOURCE NAME │ SHARED │ OWNER │ EXPIRES IN │ LABELS ┃
┠────────┼───────────────────────┼──────────────────────────────────────┼────────┼───────────────────────┼───────────────┼────────┼─────────┼────────────┼────────┨
┃ │ aws-multi-type │ 373a73c2-8295-45d4-a768-45f5a0f744ea │ 🔶 aws │ 🔶 aws-generic │ <multiple> │ ➖ │ default │ │ ┃
┃ │ │ │ │ 📦 s3-bucket │ │ │ │ │ ┃
┃ │ │ │ │ 🌀 kubernetes-cluster │ │ │ │ │ ┃
┃ │ │ │ │ 🐳 docker-registry │ │ │ │ │ ┃
┠────────┼───────────────────────┼──────────────────────────────────────┼────────┼───────────────────────┼───────────────┼────────┼─────────┼────────────┼────────┨
┃ │ aws-s3-multi-instance │ fa9325ab-ce01-4404-aec3-61a3af395d48 │ 🔶 aws │ 📦 s3-bucket │ <multiple> │ ➖ │ default │ │ ┃
┠────────┼───────────────────────┼──────────────────────────────────────┼────────┼───────────────────────┼───────────────┼────────┼─────────┼────────────┼────────┨
┃ │ aws-s3-zenfiles │ 19edc05b-92db-49de-bc84-aa9b3fb8261a │ 🔶 aws │ 📦 s3-bucket │ s3://zenfiles │ ➖ │ default │ │ ┃
support.
Upgrading your ZenML Server on HF Spaces
The default space will use the latest version of ZenML automatically. If you want to update your version, you can simply select the 'Factory reboot' option within the 'Settings' tab of the space. Note that this will wipe any data contained within the space, so if you are not using a MySQL persistent database (as described above) you will lose any data contained within your ZenML deployment on the space. You can also configure the space to use an earlier version by updating the Dockerfile's FROM import statement at the very top.
┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨
┃ ID │ 273d2812-2643-4446-82e6-6098b8ccdaa4 ┃
┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨
┃ NAME │ azure-service-principal ┃
┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨
┃ TYPE │ 🇦 azure ┃
┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨
┃ AUTH METHOD │ service-principal ┃
┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨
┃ RESOURCE TYPES │ 🇦 azure-generic, 📦 blob-container, 🌀 kubernetes-cluster, 🐳 docker-registry ┃
┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨
┃ RESOURCE NAME │ <multiple> ┃
┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨
┃ SECRET ID │ 50d9f230-c4ea-400e-b2d7-6b52ba2a6f90 ┃
┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨
┃ SESSION DURATION │ N/A ┃
┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨
┃ EXPIRES IN │ N/A ┃
┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨
Generation evaluation
Evaluate the generation component of your RAG pipeline.
Now that we have a sense of how to evaluate the retrieval component of our RAG pipeline, let's move on to the generation component. The generation component is responsible for generating the answer to the question based on the retrieved context. At this point, our evaluation starts to move into more subjective territory. It's harder to come up with metrics that can accurately capture the quality of the generated answers. However, there are some things we can do.
As with the retrieval evaluation, we can start with a simple approach and then move on to more sophisticated methods.
Handcrafted evaluation tests
As in the retrieval evaluation, we can start by putting together a set of examples where we know that our generated output should or shouldn't include certain terms. For example, if we're generating answers to questions about which orchestrators ZenML supports, we can check that the generated answers include terms like "Airflow" and "Kubeflow" (since we do support them) and exclude terms like "Flyte" or "Prefect" (since we don't (yet!) support them). These handcrafted tests should be driven by mistakes that you've already seen in the RAG output. The negative example of "Flyte" and "Prefect" showing up in the list of supported orchestrators, for example, shows up sometimes when you use GPT 3.5 as the LLM.
As another example, when you make a query asking 'what is the default orchestrator in ZenML?' you would expect that the answer would include the word 'local', so we can make a test case to confirm that.
You can view our starter set of these tests here. It's better to start with something small and simple and then expand as is needed. There's no need for complicated harnesses or frameworks at this stage.
bad_answers table:
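The table itself is cut off in this excerpt, but a minimal sketch of such a handcrafted test (with a hypothetical generate_answer helper wrapping the RAG pipeline) could look like:
bad_answers = [
    {
        "question": "What orchestrators does ZenML support?",
        "bad_words": ["Flyte", "Prefect"],
    },
]

def test_generation(generate_answer):
    for case in bad_answers:
        answer = generate_answer(case["question"])
        for word in case["bad_words"]:
            assert word not in answer, f"{word!r} should not appear in the answer"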
┃ b551f3ae-1448-4f36-97a2-52ce303f20c9 │ kube-auto │ 🌀 kubernetes │ 🌀 kubernetes-cluster │ A5F8F4142FB12DDCDE9F21F6E9B07A18.gr7.us-east-1.eks.amazonaws.com ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Every Service Connector Type defines its own rules for how Resource Names are formatted. These rules are documented in the section belonging to each resource type. For example:
zenml service-connector describe-type aws --resource-type docker-registry
Example Command Output
╔══════════════════════════════════════════════════════════════════════════════╗
║ 🐳 AWS ECR container registry (resource type: docker-registry) ║
╚══════════════════════════════════════════════════════════════════════════════╝
Authentication methods: implicit, secret-key, sts-token, iam-role,
session-token, federation-token
Supports resource instances: False
Authentication methods:
🔒 implicit
🔒 secret-key
🔒 sts-token
🔒 iam-role
🔒 session-token
🔒 federation-token
Allows users to access one or more ECR repositories as a standard Docker
registry resource. When used by Stack Components, they are provided a
pre-authenticated python-docker client instance.
The configured credentials must have at least the following AWS IAM permissions
associated with the ARNs of one or more ECR repositories that the connector will
be allowed to access (e.g. arn:aws:ecr:{region}:{account}:repository/*
represents all the ECR repositories available in the target AWS region).
ecr:DescribeRegistry
ecr:DescribeRepositories
ecr:ListRepositories
ecr:BatchGetImage
ecr:DescribeImages
ecr:BatchCheckLayerAvailability
ecr:GetDownloadUrlForLayer
ecr:InitiateLayerUpload
ecr:UploadLayerPart
ecr:CompleteLayerUpload
ecr:PutImage
ecr:GetAuthorizationToken
This resource type is not scoped to a single ECR repository. Instead, a connector configured with this resource type will grant access to all the ECR repositories that the credentials are allowed to access under the configured AWS region.
url: <The URL of the ZenML server>
verify_ssl: |
  <Either a boolean, in which case it controls whether the
  server's TLS certificate is verified, or a string, in which case it
  must be a path to a CA certificate bundle to use or the CA bundle
  value itself>
Example of a ZenML server YAML configuration file:
url: https://ac8ef63af203226194a7725ee71d85a-7635928635.us-east-1.elb.amazonaws.com/zenml
verify_ssl: |
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----
To disconnect from the current ZenML server and revert to using the local default database, use the following command:
zenml disconnect
ZenML Helm Deployment Scenarios
This section covers some common Helm deployment scenarios for ZenML.
Minimal deployment
The example below is a minimal configuration for a ZenML server deployment that uses a temporary SQLite database and a ClusterIP service that is not exposed to the internet:
zenml:
  ingress:
    enabled: false
Once deployed, you have to use port-forwarding to access the ZenML server and to connect to it from your local machine:
kubectl -n zenml-server port-forward svc/zenml-server 8080:8080
zenml connect --url=http://localhost:8080
This is just a simple example, fit only for testing and evaluation purposes. For production deployments, you should use an external database and an Ingress service with TLS certificates to secure and expose the ZenML server to the internet.
Basic deployment with local database
This deployment use-case still uses a local database, but it exposes the ZenML server to the internet using an Ingress service with TLS certificates generated by the cert-manager and signed by Let's Encrypt.
First, you need to install cert-manager and nginx-ingress in your Kubernetes cluster. You can use the following commands to install them with their default configuration:
helm repo add jetstack https://charts.jetstack.io
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
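and then install the charts themselves, for example (a sketch with default settings; the namespace names are arbitrary):
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress --create-namespace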
a pre-authenticated python-docker client instance. The configured credentials must have at least the following AWS IAM permissions associated with the ARNs of one or more ECR repositories that the connector will be allowed to access (e.g. arn:aws:ecr:{region}:{account}:repository/* represents all the ECR repositories available in the target AWS region).
ecr:DescribeRegistry
ecr:DescribeRepositories
ecr:ListRepositories
ecr:BatchGetImage
ecr:DescribeImages
ecr:BatchCheckLayerAvailability
ecr:GetDownloadUrlForLayer
ecr:InitiateLayerUpload
ecr:UploadLayerPart
ecr:CompleteLayerUpload
ecr:PutImage
ecr:GetAuthorizationToken
If you are using the AWS IAM role, Session Token, or Federation Token authentication methods, you don't have to worry too much about restricting the permissions of the AWS credentials that you use to access the AWS cloud resources. These authentication methods already support automatically generating temporary tokens with permissions down-scoped to the minimum required to access the target resource.
This resource type is not scoped to a single ECR repository. Instead, a connector configured with this resource type will grant access to all the ECR repositories that the credentials are allowed to access under the configured AWS region (i.e. all repositories under the Docker registry URL https://{account-id}.dkr.ecr.{region}.amazonaws.com).
The resource name associated with this resource type uniquely identifies an ECR registry using one of the following formats (the repository name is ignored, only the registry URL/ARN is used):
ECR repository URI (canonical resource name):
[https://]{account}.dkr.ecr.{region}.amazonaws.com[/{repository-name}]
ECR repository ARN :
arn:aws:ecr:{region}:{account-id}:repository[/{repository-name}]
ECR repository names are region scoped. The connector can only be used to access ECR repositories in the AWS region that it is configured to use.
Authentication Methods
Implicit authentication
locally on your machine or running remotely as a server. All metadata is now stored, tracked, and managed by ZenML itself. The Metadata Store stack component type and all its implementations have been deprecated and removed. It is no longer possible to register them or include them in ZenML stacks. This is a key architectural change in ZenML 0.20.0 that further improves usability and reproducibility and makes it possible to visualize and manage all your pipelines and pipeline runs in the new ZenML Dashboard.
The architecture changes for the local case are shown in the diagram below:
The architecture changes for the remote case are shown in the diagram below:
If you're already using ZenML, aside from the above limitation, this change will impact you differently, depending on the flavor of Metadata Stores you have in your stacks:
if you're using the default sqlite Metadata Store flavor in your stacks, you don't need to do anything. ZenML will automatically switch to using its local database instead of your sqlite Metadata Stores when you update to 0.20.0 (also see how to migrate your stacks).
if you're using the kubeflow Metadata Store flavor only as a way to connect to the local Kubeflow Metadata Service (i.e. the one installed by the kubeflow Orchestrator in a local k3d Kubernetes cluster), you also don't need to do anything explicitly. When you migrate your stacks to ZenML 0.20.0, ZenML will automatically switch to using its local database.
if you're using the kubeflow Metadata Store flavor to connect to a remote Kubeflow Metadata Service such as those provided by a Kubeflow installation running in AWS, Google or Azure, there is currently no equivalent in ZenML 0.20.0. You'll need to deploy a ZenML Server instance close to where your Kubeflow service is running (e.g. in the same cloud region).
if you're using the mysql Metadata Store flavor to connect to a remote MySQL database service (e.g. a managed AWS, GCP or Azure MySQL service), you'll have to deploy a ZenML Server instance connected to that same database.
    predictions = pd.Series(model2.predict(data))
    return predictions
@step
def load_data() -> pd.DataFrame:
    # load inference data
    ...
@pipeline
def do_predictions():
    # get specific artifact version
    model_42 = Client().get_artifact_version("trained_model", version="42")
    metric_42 = model_42.run_metadata["MSE"].value

    # get latest artifact version
    model_latest = Client().get_artifact_version("trained_model")
    metric_latest = model_latest.run_metadata["MSE"].value

    inference_data = load_data()
    predict(
        model1=model_42,
        model2=model_latest,
        model1_metric=metric_42,
        model2_metric=metric_latest,
        data=inference_data,
    )

if __name__ == "__main__":
    do_predictions()
Here, we enriched the predict step logic with a comparison by the MSE metric, so that predictions are made using the better of the two models. We also added a load_data step to load the inference data.
As before, calls like Client().get_artifact_version("trained_model", version="42") or model_latest.run_metadata["MSE"].value are not evaluating the actual objects behind them at pipeline compilation time. Rather, they do so only at the point of step execution. By doing so, we ensure that the latest version is actually the latest at the moment and not just the latest at the point of pipeline compilation.
    message = checker(results)

validation_pipeline()
As can be seen from the step definition, the step takes in a pandas.DataFrame dataset and a boolean condition, and it returns a Great Expectations CheckpointResult object. The boolean condition is only used as a means of ordering steps in a pipeline (e.g. if you must force it to run only after the data profiling step generates an Expectation Suite):
@step
def great_expectations_validator_step(
    dataset: pd.DataFrame,
    expectation_suite_name: str,
    data_asset_name: Optional[str] = None,
    action_list: Optional[List[Dict[str, Any]]] = None,
    exit_on_error: bool = False,
) -> CheckpointResult:
You can view the complete list of configuration parameters in the SDK docs.
Call Great Expectations directly
You can use the Great Expectations library directly in your custom pipeline steps, while leveraging ZenML's capability of serializing, versioning and storing the ExpectationSuite and CheckpointResult objects in its Artifact Store. To use the Great Expectations configuration managed by ZenML while interacting with the Great Expectations library directly, you need to use the Data Context managed by ZenML instead of the default one provided by Great Expectations, e.g.:
import great_expectations as ge
from zenml.integrations.great_expectations.data_validators import (
    GreatExpectationsDataValidator,
)
import pandas as pd
from great_expectations.core import ExpectationConfiguration, ExpectationSuite
from zenml import step
@step
def create_custom_expectation_suite(
) -> ExpectationSuite:
    """Custom step that creates an Expectation Suite.

    Returns:
        An Expectation Suite
    """
    context = GreatExpectationsDataValidator.get_data_context()
    # instead of:
    # context = ge.get_context()
    expectation_suite_name = "custom_suite"
    suite = context.create_expectation_suite(
        expectation_suite_name=expectation_suite_name
    )
    expectation_configuration = ExpectationConfiguration(...)
    suite.add_expectation(expectation_configuration=expectation_configuration)
    ...
    context.save_expectation_suite(
│ az://demo-zenmlartifactstore ┃
┠───────────────────────┼───────────────────────────────────────────────┨
┃ 🌀 kubernetes-cluster │ demo-zenml-demos/demo-zenml-terraform-cluster ┃
┠───────────────────────┼───────────────────────────────────────────────┨
┃ 🐳 docker-registry │ demozenmlcontainerregistry.azurecr.io ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```
register and connect an Azure Blob Storage Artifact Store Stack Component to an Azure blob container:

zenml artifact-store register azure-demo --flavor azure --path=az://demo-zenmlartifactstore
Example Command Output
```
Successfully registered artifact_store `azure-demo`.
```
```sh
zenml artifact-store connect azure-demo --connector azure-service-principal
```
Example Command Output
```
Successfully connected artifact store `azure-demo` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃
┠──────────────────────────────────────┼─────────────────────────┼────────────────┼───────────────────┼──────────────────────────────┨
┃ f2316191-d20b-4348-a68b-f5e347862196 │ azure-service-principal │ 🇦 azure │ 📦 blob-container │ az://demo-zenmlartifactstore ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```
register and connect a Kubernetes Orchestrator Stack Component to an AKS cluster:

zenml orchestrator register aks-demo-cluster --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads
Example Command Output
```
Successfully registered orchestrator `aks-demo-cluster`.
```
```sh
zenml orchestrator connect aks-demo-cluster --connector azure-service-principal
```
Example Command Output
``` | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/azure-service-connector | 596 |
""
train_dataloader, test_dataloader = importer()model = trainer(train_dataloader)
accuracy = evaluator(test_dataloader=test_dataloader, model=model)
bento = bento_builder(model=model)
@pipeline
def local_deploy_pipeline(
    bento_loader,
    deployer,
):
    """Link all the steps and artifacts together"""
    bento = bento_loader()
    # `deploy_decision` could also be computed by an upstream step:
    deployer(deploy_decision=True, bento=bento)
Predicting with the local deployed model
Once the model has been deployed, we can use the BentoML client to send requests to the deployed model. ZenML will automatically create a BentoML client for you, and you can use it to send requests by simply calling the service's predict method, passing the input data and the API function name.
The following example shows how to use the BentoML client to send requests to the deployed model.
from typing import Dict, List

import numpy as np
from rich import print as rich_print

from zenml import step
from zenml.integrations.bentoml.services import BentoMLDeploymentService


@step
def predictor(
    inference_data: Dict[str, List],
    service: BentoMLDeploymentService,
) -> None:
    """Run an inference request against the BentoML prediction service.

    Args:
        inference_data: The data to predict.
        service: The BentoML service.
    """
    service.start(timeout=10)  # should be a NOP if already started
    for img, data in inference_data.items():
        prediction = service.predict("predict_ndarray", np.array(data))
        result = to_labels(prediction[0])  # user-defined helper mapping raw outputs to labels
        rich_print(f"Prediction for {img} is {result}")
Deploying and testing locally is a great way to get started and test your model. However, a real-world scenario will most likely require you to deploy your model to a remote environment. The next section will show you how to deploy the Bento you built with ZenML pipelines to a cloud environment using the bentoctl CLI.
From Local to Cloud with bentoctl
Bentoctl helps deploy machine learning models as production-ready API endpoints in the cloud. It is a command line tool that provides a simple interface to manage your BentoML bundles.
The bentoctl CLI provides a list of operators which are plugins that interact with cloud services, some of these operators are:
AWS Lambda | stack-components | https://docs.zenml.io/stack-components/model-deployers/bentoml | 436 |
# This will build the Docker image the first time
python run.py --training-pipeline
# This will skip Docker building
python run.py --training-pipeline
You can read more about the ZenML Git Integration here.
PreviousConfigure your pipeline to add compute
NextSet up CI/CD
Last updated 19 days ago | user-guide | https://docs.zenml.io/v/docs/user-guide/production-guide/connect-code-repository | 67 |
Retrieval evaluation
See how the retrieval component responds to changes in the pipeline.
The retrieval component of our RAG pipeline is responsible for finding relevant documents or document chunks to feed into the generation component. In this section we'll explore how to evaluate the performance of the retrieval component of your RAG pipeline. We're checking how accurate the semantic search is, or in other words how relevant the retrieved documents are to the query.
Our retrieval component takes the incoming query and converts it into a vector or embedded representation that can be used to search for relevant documents. We then use this representation to search through a corpus of documents and retrieve the most relevant ones.
Manual evaluation using handcrafted queries
The most naive and simple way to check this would be to handcraft some queries where we know the specific documents needed to answer it. We can then check if the retrieval component is able to retrieve these documents. This is a manual evaluation process and can be time-consuming, but it's a good way to get a sense of how well the retrieval component is working. It can also be useful to target known edge cases or difficult queries to see how the retrieval component handles those known scenarios.
Implementing this is pretty simple - you just need to create some queries and check the retrieved documents. Having tested the basic inference of our RAG setup quite a bit, there were some clear areas where the retrieval component could be improved. I looked in our documentation to find some examples where the information could only be found in a single page and then wrote some queries that would require the retrieval component to find that page. For example, the query "How do I get going with the Label Studio integration? What are the first steps?" would require the retrieval component to find the Label Studio integration page. Some of the other examples used are: | user-guide | https://docs.zenml.io/v/docs/user-guide/llmops-guide/evaluation/retrieval | 362 |
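Whatever the exact queries, the check itself is easy to script. Here is a minimal sketch, assuming a hypothetical query_similar_docs function that returns the URLs of the retrieved document chunks:

```python
# (query, expected URL fragment) pairs; both values are illustrative
retrieval_test_cases = [
    (
        "How do I get going with the Label Studio integration? "
        "What are the first steps?",
        "label-studio",
    ),
    # ... more handcrafted cases, ideally covering known edge cases
]

def evaluate_retrieval(query_similar_docs, n_results: int = 5) -> float:
    """Return the fraction of test queries whose expected page is retrieved."""
    hits = 0
    for question, url_fragment in retrieval_test_cases:
        urls = query_similar_docs(question, n_results=n_results)
        if any(url_fragment in url for url in urls):
            hits += 1
    return hits / len(retrieval_test_cases)
```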
apply the backup if the automatic recovery fails.

The following example shows how to deploy the ZenML server to use a mounted host directory to persist the database backup file during a database migration:
mkdir mysql-data
docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password \
--mount type=bind,source=$PWD/mysql-data,target=/var/lib/mysql \
mysql:8.0
docker run -it -d -p 8080:8080 --name zenml \
--add-host host.docker.internal:host-gateway \
--mount type=bind,source=$PWD/mysql-data,target=/db-dump \
--env ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml \
--env ZENML_STORE_BACKUP_STRATEGY=dump-file \
--env ZENML_STORE_BACKUP_DIRECTORY=/db-dump \
zenmldocker/zenml-server
Troubleshooting
You can check the logs of the container to verify if the server is up and, depending on where you have deployed it, you can also access the dashboard at a localhost port (if running locally) or through some other service that exposes your container to the internet.
CLI Docker deployments
If you used the zenml up --docker CLI command to deploy the Docker ZenML server, you can check the logs with the command:
zenml logs -f
Manual Docker deployments
If you used the docker run command to manually deploy the Docker ZenML server, you can check the logs with the command:
docker logs zenml -f
If you used the docker compose command to manually deploy the Docker ZenML server, you can check the logs with the command:
docker compose -p zenml logs -f
PreviousDeploy with ZenML CLI
NextDeploy with Helm
Last updated 14 days ago | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-docker | 372 |
Connect in with your User (interactive)
You can authenticate your clients with the ZenML Server using the ZenML CLI and the web based login. This can be executed with the command:
zenml connect --url https://...
This command will start a sequence of steps, carried out in your browser, to validate the device from which you are connecting. You can choose whether to mark the device as trusted or not. If you choose not to click Trust this device, a 24-hour token will be issued for authentication services. Choosing to trust the device will issue a 30-day token instead.
To see all devices you've permitted, use the following command:
zenml authorized-device list
Additionally, the following command allows you to more precisely inspect one of these devices:
zenml authorized-device describe <DEVICE_ID>
For increased security, you can invalidate a token using the zenml authorized-device lock command followed by the device ID. This helps provide an extra layer of security and control over your devices.
zenml authorized-device lock <DEVICE_ID>
To keep things simple, we can summarize the steps:
Use the zenml connect --url command to start a device flow and connect to a zenml server.
Choose whether to trust the device when prompted.
Check permitted devices with zenml authorized-device list.
Invalidate a token with zenml authorized-device lock <DEVICE_ID>.
Important notice
Using the ZenML CLI is a secure and comfortable way to interact with your ZenML tenants. It's important to always ensure that only trusted devices are used to maintain security and privacy.
Don't forget to manage your device trust levels regularly for optimal security. Should you feel a device trust needs to be revoked, lock the device immediately. Every token issued is a potential gateway to access your data, secrets and infrastructure.
PreviousConnect to a server
NextConnect with a Service Account
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/connecting-to-zenml/connect-in-with-your-user-interactive | 373 |
_NAME --key=AWS_ACCESS_KEY --secret=AWS_SECRET_KEY

a ZenML secret can hold the AWS credentials and then be referenced in the S3 Artifact Store configuration attributes:

zenml secret create aws --aws_access_key_id=AWS_ACCESS_KEY --aws_secret_access_key=AWS_SECRET_KEY
zenml artifact-store register s3 --flavor s3 --path=s3://BUCKET_NAME --key='{{aws.aws_access_key_id}}' --secret='{{aws.aws_secret_access_key}}'
an even better version is to reference the secret itself in the S3 Artifact Store configuration:

zenml secret create aws --aws_access_key_id=AWS_ACCESS_KEY --aws_secret_access_key=AWS_SECRET_KEY
zenml artifact-store register s3 --flavor s3 --path=s3://BUCKET_NAME --authentication_secret=aws
All these options work, but they have many drawbacks:
first of all, not all Stack Components support referencing secrets in their configuration attributes, so this is not a universal solution.
some Stack Components, like those linked to Kubernetes clusters, rely on credentials being set up on the machine where the pipeline is running, which makes pipelines less portable and more difficult to set up. In other cases, you also need to install and set up cloud-specific SDKs and CLIs to be able to use the Stack Component.
people configuring and using Stack Components linked to cloud resources need to be given access to cloud credentials, or even provision the credentials themselves, which requires access to the cloud provider platform and knowledge about how to do it.
in many cases, you can only configure long-lived credentials directly in Stack Components. This is a security risk because they can inadvertently grant access to key resources and services to a malicious party if they are compromised. Implementing a process that rotates credentials regularly is a complex task that requires a lot of effort and maintenance. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management | 376 |
registry or even more than one type of AWS resource:

zenml service-connector register --type aws -i
A non-interactive CLI example that leverages the AWS CLI configuration on your local machine to auto-configure an AWS Service Connector targeting an ECR registry is:
zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type docker-registry --auto-configure
Example Command Output
$ zenml service-connector register aws-us-east-1 --type aws --resource-type docker-registry --auto-configure
⠸ Registering service connector 'aws-us-east-1'...
Successfully registered service connector `aws-us-east-1` with access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE │ RESOURCE NAMES ┃
┠────────────────────┼──────────────────────────────────────────────┨
┃ 🐳 docker-registry │ 715803424590.dkr.ecr.us-east-1.amazonaws.com ┃
┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Note: Please remember to grant the entity associated with your AWS credentials permissions to read and write to one or more ECR repositories as well as to list accessible ECR repositories. For a full list of permissions required to use an AWS Service Connector to access an ECR registry, please refer to the AWS Service Connector ECR registry resource type documentation or read the documentation available in the interactive CLI commands and dashboard. The AWS Service Connector supports many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.
If you already have one or more AWS Service Connectors configured in your ZenML deployment, you can check which of them can be used to access the ECR registry you want to use for your AWS Container Registry by running e.g.:
zenml service-connector list-resources --connector-type aws --resource-type docker-registry
Example Command Output | stack-components | https://docs.zenml.io/stack-components/container-registries/aws | 460 |
column_mapping=column_mapping,

Let's break this down...

We configure the evidently_report_step using parameters that you would normally pass to the Evidently Report object to configure and run an Evidently report. It consists of the following fields:
column_mapping: This is an EvidentlyColumnMapping object that is the exact equivalent of the ColumnMapping object in Evidently. It is used to describe the columns in the dataset and how they should be treated (e.g. as categorical, numerical, or text features).
metrics: This is a list of EvidentlyMetricConfig objects that are used to configure the metrics that should be used to generate the report in a declarative way. This is the same as configuring the metrics that go in the Evidently Report.
download_nltk_data: This is a boolean that is used to indicate whether the NLTK data should be downloaded. This is only needed if you are using Evidently reports that handle text data, which require the NLTK data to be downloaded ahead of time.
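Putting these fields together, a configuration of the step could look roughly like the following sketch (the column names and metric choice are illustrative, not part of the original example):

```python
from zenml.integrations.evidently.column_mapping import EvidentlyColumnMapping
from zenml.integrations.evidently.metrics import EvidentlyMetricConfig
from zenml.integrations.evidently.steps import evidently_report_step

text_data_report = evidently_report_step.with_options(
    parameters=dict(
        column_mapping=EvidentlyColumnMapping(
            target="Rating",
            text_features=["Review_Text", "Title"],
        ),
        metrics=[
            EvidentlyMetricConfig.metric("DataQualityPreset"),
        ],
        # required here because the report handles text data:
        download_nltk_data=True,
    ),
)
```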
There are several ways you can reference the Evidently metrics when configuring EvidentlyMetricConfig items:
by class name: this is the easiest way to reference an Evidently metric. You can use the name of a metric or metric preset class as it appears in the Evidently documentation (e.g."DataQualityPreset", "DatasetDriftMetric").
by full class path: you can also use the full Python class path of the metric or metric preset class (e.g. "evidently.metric_preset.DataQualityPreset", "evidently.metrics.DatasetDriftMetric"). This is useful if you want to use metrics or metric presets that are not included in the Evidently library.
by passing in the class itself: you can also import and pass in an Evidently metric or metric preset class itself, e.g.:

from evidently.metrics import DatasetDriftMetric
...
evidently_report_step.with_options(
parameters=dict(
metrics=[EvidentlyMetricConfig.metric(DatasetDriftMetric)]
),
) | stack-components | https://docs.zenml.io/stack-components/data-validators/evidently | 420 |
GitHub Container Registry
Storing container images in GitHub.
The GitHub container registry is a container registry flavor that comes built-in with ZenML and uses the GitHub Container Registry to store container images.
When to use it
You should use the GitHub container registry if:
one or more components of your stack need to pull or push container images.
you're using GitHub for your projects. If you're not using GitHub, take a look at the other container registry flavors.
How to deploy it
The GitHub container registry is enabled by default when you create a GitHub account.
How to find the registry URI
The GitHub container registry URI should have the following format:
ghcr.io/<USER_OR_ORGANIZATION_NAME>
# Examples:
ghcr.io/zenml
ghcr.io/my-username
ghcr.io/my-organization
To figure out the URI for your registry:
Use the GitHub user or organization name to fill the template ghcr.io/<USER_OR_ORGANIZATION_NAME> and get your URI.
How to use it
To use the GitHub container registry, we need:
Docker installed and running.
The registry URI. Check out the previous section on the URI format and how to get the URI for your registry.
Our Docker client configured so that it can pull and push images. Follow this guide to create a personal access token and log in to the container registry, as sketched below.
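A typical token-based login looks like this sketch (assuming the personal access token is stored in the CR_PAT environment variable):

```sh
echo $CR_PAT | docker login ghcr.io -u <USERNAME> --password-stdin
```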
We can then register the container registry and use it in our active stack:
zenml container-registry register <NAME> \
--flavor=github \
--uri=<REGISTRY_URI>
# Add the container registry to the active stack
zenml stack update -c <NAME>
For more information and a full list of configurable attributes of the GitHub container registry, check out the SDK Docs .
PreviousAzure Container Registry
NextDevelop a custom container registry
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/github | 371 |
Linking model binaries/data to a Model
Artifacts generated during pipeline runs can be linked to models in ZenML. This connecting of artifacts provides lineage tracking and transparency into what data and models are used during training, evaluation, and inference.
There are a few ways to link artifacts:
Configuring the Model at a pipeline level
The easiest way is to configure the model parameter on the @pipeline decorator or @step decorator:
from zenml import Model, pipeline
model = Model(
    name="my_model",
    version="1.0.0",
)
@pipeline(model=model)
def my_pipeline():
...
This will automatically link all artifacts from this pipeline run to the specified model configuration.
Controlling artifact types and linkage
A ZenML model supports linking three types of artifacts:
Data artifacts: These are the default artifacts. If nothing is specified, all artifacts are grouped under this category.
Model artifacts: If there is a physical model artifact like a .pkl file or a model neural network weights file, it should be grouped in this category.
Deployment artifacts: These are artifacts related to the endpoints and deployments of the models.
You can also explicitly specify the linkage on a per-artifact basis by passing a special configuration to the Annotated output:
from typing import Tuple

import pandas as pd
from sklearn.base import ClassifierMixin
from typing_extensions import Annotated

from zenml import step, ArtifactConfig


@step
def svc_trainer(
    X_train: pd.DataFrame,
    y_train: pd.Series,
    gamma: float = 0.001,
) -> Tuple[
    # The ArtifactConfig flag marks this output as a Model Artifact
    Annotated[ClassifierMixin, ArtifactConfig("trained_model", is_model_artifact=True)],
    # The ArtifactConfig flag marks this output as a Deployment Artifact
    Annotated[str, ArtifactConfig("deployment_uri", is_deployment_artifact=True)],
]:
    ...
strator_url"].value
Specifying per-step resources

If your steps require the orchestrator to execute them on specific hardware, you can specify them on your steps as described here.
If your orchestrator of choice or the underlying hardware doesn't support this, you can also take a look at step operators.
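As a quick sketch of what a per-step resource request looks like (the values are placeholders):

```python
from zenml import step
from zenml.config import ResourceSettings

@step(settings={"resources": ResourceSettings(cpu_count=4, gpu_count=1, memory="16GB")})
def training_step() -> None:
    ...
```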
PreviousOverview
NextLocal Orchestrator
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators | 76 |
-registry │ iam-role │ │ ┃
┃ │ │ │ session-token │ │ ┃
┃ │ │ │ federation-token │ │ ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛
```
Register a multi-type AWS Service Connector using auto-configuration

AWS_PROFILE=connectors zenml service-connector register aws-demo-multi --type aws --auto-configure
Example Command Output
```text
⠼ Registering service connector 'aws-demo-multi'...
Successfully registered service connector `aws-demo-multi` with access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE │ RESOURCE NAMES ┃
┠───────────────────────┼──────────────────────────────────────────────┨
┃ 🔶 aws-generic │ us-east-1 ┃
┠───────────────────────┼──────────────────────────────────────────────┨
┃ 📦 s3-bucket │ s3://zenfiles ┃
┃ │ s3://zenml-demos ┃
┃ │ s3://zenml-generative-chat ┃
┠───────────────────────┼──────────────────────────────────────────────┨
┃ 🌀 kubernetes-cluster │ zenhacks-cluster ┃
┠───────────────────────┼──────────────────────────────────────────────┨
┃ 🐳 docker-registry │ 715803424590.dkr.ecr.us-east-1.amazonaws.com ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```
**NOTE**: from this point forward, we don't need the local AWS CLI credentials or the local AWS CLI at all. The steps that follow can be run on any machine regardless of whether it has been configured and authorized to access the AWS platform or not. | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 521 |
zenml experiment-tracker register wandb_tracker \
    --flavor=wandb \
    --entity={{wandb_secret.entity}} \
    --project_name={{wandb_secret.project_name}} \
    --api_key={{wandb_secret.api_key}}
...
Read more about ZenML Secrets in the ZenML documentation.
For more, up-to-date information on the Weights & Biases Experiment Tracker implementation and its configuration, you can have a look at the SDK docs .
How do you use it?
To be able to log information from a ZenML pipeline step using the Weights & Biases Experiment Tracker component in the active stack, you need to enable an experiment tracker using the @step decorator. Then use Weights & Biases logging or auto-logging capabilities as you would normally do, e.g.:
import wandb
from wandb.integration.keras import WandbCallback
@step(experiment_tracker="<WANDB_TRACKER_STACK_COMPONENT_NAME>")
def tf_trainer(
    config: TrainerConfig,
    x_train: np.ndarray,
    y_train: np.ndarray,
    x_val: np.ndarray,
    y_val: np.ndarray,
) -> tf.keras.Model:
    ...
    model.fit(
        x_train,
        y_train,
        epochs=config.epochs,
        validation_data=(x_val, y_val),
        callbacks=[
            WandbCallback(
                log_evaluation=True,
                validation_steps=16,
                validation_data=(x_val, y_val),
            )
        ],
    )

    metric = ...
    wandb.log({"<METRIC_NAME>": metric})
Instead of hardcoding an experiment tracker name, you can also use the Client to dynamically use the experiment tracker of your active stack:
from zenml.client import Client
experiment_tracker = Client().active_stack.experiment_tracker
@step(experiment_tracker=experiment_tracker.name)
def tf_trainer(...):
...
Weights & Biases UI
Weights & Biases comes with a web-based UI that you can use to find further details about your tracked experiments.
Every ZenML step that uses Weights & Biases should create a separate experiment run which you can inspect in the Weights & Biases UI:
You can find the URL of the Weights & Biases experiment linked to a specific ZenML run via the metadata of the step in which the experiment tracker was used:
from zenml.client import Client | stack-components | https://docs.zenml.io/stack-components/experiment-trackers/wandb | 452 |
    Annotated[int, "remainder"]
]:
    return a // b, a % b

If you do not give your outputs custom names, the created artifacts will be named {pipeline_name}::{step_name}::output or {pipeline_name}::{step_name}::output_{i} in the dashboard. See the documentation on artifact versioning and configuration for more information.
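For reference, the complete step that the code fragment above comes from would look roughly like this (the first output name, "quotient", is an assumption):

```python
from typing import Tuple

from typing_extensions import Annotated

from zenml import step

@step
def divide(a: int, b: int) -> Tuple[
    Annotated[int, "quotient"],
    Annotated[int, "remainder"],
]:
    return a // b, a % b
```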
See Also:
Learn more about output annotation here
For custom data types you should check these docs out
PreviousUse pipeline/step parameters
NextControl caching behavior
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/build-pipelines/step-output-typing-and-annotation | 112 |
>>> s3_client.head_bucket(Bucket="zenfiles")
{'ResponseMetadata': {'RequestId': '62YRYW5XJ1VYPCJ0', 'HostId': 'YNBXcGUMSOh90AsTgPW6/Ra89mqzfN/arQq/FMcJzYCK98cFx53+9LLfAKzZaLhwaiJTm+s3mnU=', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-id-2': 'YNBXcGUMSOh90AsTgPW6/Ra89mqzfN/arQq/FMcJzYCK98cFx53+9LLfAKzZaLhwaiJTm+s3mnU=', 'x-amz-request-id': '62YRYW5XJ1VYPCJ0', 'date': 'Fri, 16 Jun 2023 11:04:20 GMT', 'x-amz-bucket-region': 'us-east-1', 'x-amz-access-point-alias': 'false', 'content-type': 'application/xml', 'server': 'AmazonS3'}, 'RetryAttempts': 0}}
>>>
>>> # Try to access another S3 bucket that the original AWS long-lived credentials can access.
>>> # An error will be thrown indicating that the bucket is not accessible.
>>> s3_client.head_bucket(Bucket="zenml-demos")
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ <stdin>:1 in <module> │
│ │
│ /home/stefan/aspyre/src/zenml/.venv/lib/python3.8/site-packages/botocore/client.py:508 in │
│ _api_call │
│ │
│ 505 │ │ │ │ │ f"{py_operation_name}() only accepts keyword arguments." │
│ 506 │ │ │ │ ) │
│ 507 │ │ │ # The "self" in this scope is referring to the BaseClient. │
│ ❱ 508 │ │ │ return self._make_api_call(operation_name, kwargs) │
│ 509 │ │ │
│ 510 │ │ _api_call.__name__ = str(py_operation_name) │ | how-to | https://docs.zenml.io/how-to/auth-management/best-security-practices | 525 |
┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ ID │ e316bcb3-6659-467b-81e5-5ec25bfd36b0 ┃
┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ NAME │ aws-sts-token ┃
┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ TYPE │ 🔶 aws ┃
┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ AUTH METHOD │ sts-token ┃
┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ RESOURCE TYPES │ 🔶 aws-generic, 📦 s3-bucket, 🌀 kubernetes-cluster, 🐳 docker-registry ┃
┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ RESOURCE NAME │ <multiple> ┃
┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ SECRET ID │ 971318c9-8db9-4297-967d-80cda070a121 ┃
┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ SESSION DURATION │ N/A ┃
┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ EXPIRES IN │ 11h58m17s ┃
┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ OWNER │ default ┃ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/best-security-practices | 391 |
Develop a custom artifact store
Learning how to develop a custom artifact store.
Before diving into the specifics of this component type, it is beneficial to familiarize yourself with our general guide to writing custom component flavors in ZenML. This guide provides an essential understanding of ZenML's component flavor concepts.
ZenML comes equipped with Artifact Store implementations that you can use to store artifacts on a local filesystem or in the managed AWS, GCP, or Azure cloud object storage services. However, if you need to use a different type of object storage service as a backend for your ZenML Artifact Store, you can extend ZenML to provide your own custom Artifact Store implementation.
Base Abstraction
The Artifact Store establishes one of the main components in every ZenML stack. Now, let us take a deeper dive into the fundamentals behind its abstraction, namely the BaseArtifactStore class:
As ZenML only supports filesystem-based artifact stores, it features a configuration parameter called path, which will indicate the root path of the artifact store. When registering an artifact store, users will have to define this parameter.
Moreover, there is another variable in the config class called SUPPORTED_SCHEMES. This is a class variable that needs to be defined in every subclass of the base artifact store configuration. It indicates the supported file path schemes for the corresponding implementation. For instance, for the Azure artifact store, this set will be defined as {"abfs://", "az://"}.
Lastly, the base class features a set of abstract methods: open, copyfile, exists, glob, isdir, listdir, makedirs, mkdir, remove, rename, rmtree, stat, walk. In the implementation of every ArtifactStore flavor, it is required to define these methods with respect to the flavor at hand.
Putting all these considerations together, we end up with the following implementation:
from zenml.enums import StackComponentType | stack-components | https://docs.zenml.io/stack-components/artifact-stores/custom | 376 |
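# NOTE: what follows is an abbreviated sketch reconstructed from the
# description above; refer to the ZenML source code for the full,
# authoritative definition of the base abstraction.
from abc import abstractmethod
from typing import Any, ClassVar, Set

from zenml.stack import StackComponent, StackComponentConfig

PathType = Any  # a union of str and bytes in the actual implementation

class BaseArtifactStoreConfig(StackComponentConfig):
    """Config class for the BaseArtifactStore."""

    path: str  # the root path of the artifact store

    SUPPORTED_SCHEMES: ClassVar[Set[str]]  # e.g. {"abfs://", "az://"}

class BaseArtifactStore(StackComponent):
    """Base class for all ZenML artifact stores."""

    @abstractmethod
    def open(self, path: PathType, mode: str = "r") -> Any:
        """Open a file at the given path."""

    @abstractmethod
    def exists(self, path: PathType) -> bool:
        """Check whether the given path exists."""

    # ... the remaining abstract methods (copyfile, glob, isdir, listdir,
    # makedirs, mkdir, remove, rename, rmtree, stat, walk) follow the
    # same pattern.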
from zenml import step, pipeline
from zenml.integrations.slack.alerters.slack_alerter import SlackAlerterParameters
from zenml.integrations.slack.steps.slack_alerter_post_step import slack_alerter_post_step

@step
def my_custom_block_step(block_message: str) -> SlackAlerterParameters:
    my_custom_block = [
        {
            "type": "header",
            "text": {
                "type": "plain_text",
                "text": f":tada: {block_message}",
                "emoji": True,
            },
        }
    ]
    return SlackAlerterParameters(blocks=my_custom_block)
@pipeline
def my_pipeline(...):
    ...
    message_blocks = my_custom_block_step("my custom block!")
    post_message = slack_alerter_post_step(params=message_blocks)
    return post_message

if __name__ == "__main__":
    my_pipeline()
For more information and a full list of configurable attributes of the Slack alerter, check out the SDK Docs .
PreviousDiscord Alerter
NextDevelop a Custom Alerter
Last updated 15 days ago | stack-components | https://docs.zenml.io/stack-components/alerters/slack | 169 |
36a885: Pull complete
c9c0554c8e6a: Pull complete
bacdcd847a66: Pull complete
482033770844: Pull complete
Digest: sha256:bf2cc3895e70dfa1ee1cd90bbfa599fa4cd8df837e27184bac1ce1cc239ecd3f
Status: Downloaded newer image for 715803424590.dkr.ecr.us-east-1.amazonaws.com/zenml-server:latest
715803424590.dkr.ecr.us-east-1.amazonaws.com/zenml-server:latest
It is also possible to update the local AWS CLI configuration with credentials extracted from the AWS Service Connector:
zenml service-connector login aws-session-token --resource-type aws-generic
Example Command Output
Configured local AWS SDK profile 'zenml-c0f8e857'.
The 'aws-session-token' AWS Service Connector connector was used to successfully configure the local Generic AWS resource client/SDK.
A new profile is created in the local AWS CLI configuration holding the credentials. It can be used to access AWS resources and services, e.g.:
aws --profile zenml-c0f8e857 s3 ls
Stack Components use
The S3 Artifact Store Stack Component can be connected to a remote AWS S3 bucket through an AWS Service Connector.
The AWS Service Connector can also be used with any Orchestrator or Model Deployer stack component flavor that relies on Kubernetes clusters to manage workloads. This allows EKS Kubernetes container workloads to be managed without the need to configure and maintain explicit AWS or Kubernetes kubectl configuration contexts and credentials in the target environment and in the Stack Component.
Similarly, Container Registry Stack Components can be connected to an ECR Container Registry through an AWS Service Connector. This allows container images to be built and published to ECR container registries without the need to configure explicit AWS credentials in the target environment or the Stack Component.
End-to-end examples | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 404 |
DB services support this, CloudSQL is an example).

Collect information from your secrets management service
Using an externally managed secrets management service like those offered by Google Cloud, AWS, Azure or HashiCorp Vault is optional, but is recommended if you are already using those cloud service providers. If omitted, ZenML will default to using the SQL database to store secrets.
If you decide to use an external secrets management service, you will need to collect and prepare the following information for the Helm chart configuration (for supported back-ends only):
For the AWS secrets manager:
the AWS region that you want to use to store your secrets
an AWS access key ID and secret access key that provides full access to the AWS secrets manager service. You can create a dedicated IAM user for this purpose, or use an existing user with the necessary permissions. If you deploy the ZenML server in an EKS Kubernetes cluster that is already configured to use implicit authorization with an IAM role for service accounts, you can omit this step.
For the Google Cloud secrets manager:
the Google Cloud project ID that you want to use to store your secrets
a Google Cloud service account that has access to the secrets manager service. You can create a dedicated service account for this purpose, or use an existing service account with the necessary permissions.
For the Azure Key Vault:
the name of the Azure Key Vault that you want to use to store your secrets
the Azure tenant ID, client ID, and client secret associated with the Azure service principal that will be used to access the Azure Key Vault. You can create a dedicated application service principal for this purpose, or use an existing service principal with the necessary permissions. If you deploy the ZenML server in an AKS Kubernetes cluster that is already configured to use implicit authorization through the Azure-managed identity service, you can omit this step.
For the HashiCorp Vault:
the URL of the HashiCorp Vault server | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-helm | 385 |
type-specific metadata and visualizations.

Metadata

All output artifacts saved through ZenML will automatically have certain datatype-specific metadata saved with them. NumPy Arrays, for instance, always have their storage size, shape, dtype, and some statistical properties saved with them. You can access such metadata via the run_metadata attribute of an output, e.g.:
output_metadata = output.run_metadata
storage_size_in_bytes = output_metadata["storage_size"].value
We will talk more about metadata in the next section.
Visualizations
ZenML automatically saves visualizations for many common data types. Using the visualize() method you can programmatically show these visualizations in Jupyter notebooks:
output.visualize()
If you're not in a Jupyter notebook, you can simply view the visualizations in the ZenML dashboard by running zenml up and clicking on the respective artifact in the pipeline run DAG instead. Check out the artifact visualization page to learn more about how to build and view artifact visualizations in ZenML!
Fetching information during run execution
While most of this document has focused on fetching objects after a pipeline run has been completed, the same logic can also be used within the context of a running pipeline.
This is often desirable in cases where a pipeline is running continuously over time and decisions have to be made according to older runs.
For example, this is how we can fetch the last pipeline run of the same pipeline from within a ZenML step:
from zenml import get_step_context, step
from zenml.client import Client

@step
def my_step():
    # Get the name of the current pipeline run
    current_run_name = get_step_context().pipeline_run.name

    # Fetch the current pipeline run
    current_run = Client().get_pipeline_run(current_run_name)

    # Fetch the previous run of the same pipeline
    previous_run = current_run.pipeline.runs[1]  # index 0 is the current run
━━━━━┛
Configuration
┏━━━━━━━━━━━━┯━━━━━━━━━━━━┓
┃ PROPERTY │ VALUE ┃
┠────────────┼────────────┨
┃ project_id │ zenml-core ┃
┠────────────┼────────────┨
┃ token │ [HIDDEN] ┃
┗━━━━━━━━━━━━┷━━━━━━━━━━━━┛
Note the temporary nature of the Service Connector. It will expire and become unusable in 1 hour:
zenml service-connector list --name gcp-oauth2-token
Example Command Output
┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓
┃ ACTIVE │ NAME │ ID │ TYPE │ RESOURCE TYPES │ RESOURCE NAME │ SHARED │ OWNER │ EXPIRES IN │ LABELS ┃
┠────────┼──────────────────┼──────────────────────────────────────┼────────┼───────────────────────┼───────────────┼────────┼─────────┼────────────┼────────┨
┃ │ gcp-oauth2-token │ ec4d7d85-c71c-476b-aa76-95bf772c90da │ 🔵 gcp │ 🔵 gcp-generic │ <multiple> │ ➖ │ default │ 59m35s │ ┃
┃ │ │ │ │ 📦 gcs-bucket │ │ │ │ │ ┃
┃ │ │ │ │ 🌀 kubernetes-cluster │ │ │ │ │ ┃
┃ │ │ │ │ 🐳 docker-registry │ │ │ │ │ ┃
┗━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━┛
Auto-configuration
The GCP Service Connector allows auto-discovering and fetching credentials and configuration set up by the GCP CLI on your local host. | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 609 |
────────┼────────┼─────────┼────────────┼────────┨
┃ │ gcr-zenml-core │ 9fddfaba-6d46-4806-ad96-9dcabef74639 │ 🔵 gcp │ 🐳 docker-registry │ gcr.io/zenml-core │ ➖ │ default │ │ ┃
┠────────┼──────────────────────────────┼──────────────────────────────────────┼────────┼────────────────────┼──────────────────────┼────────┼─────────┼────────────┼────────┨
┃ │ vertex-ai-zenml-core │ f97671b9-8c73-412b-bf5e-4b7c48596f5f │ 🔵 gcp │ 🔵 gcp-generic │ zenml-core │ ➖ │ default │ │ ┃
┠────────┼──────────────────────────────┼──────────────────────────────────────┼────────┼────────────────────┼──────────────────────┼────────┼─────────┼────────────┼────────┨
┃ │ gcp-cloud-builder-zenml-core │ 648c1016-76e4-4498-8de7-808fd20f057b │ 🔵 gcp │ 🔵 gcp-generic │ zenml-core │ ➖ │ default │ │ ┃
┗━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━┛
```
register and connect a GCS Artifact Store Stack Component to the GCS bucket:

zenml artifact-store register gcs-zenml-bucket-sl --flavor gcp --path=gs://zenml-bucket-sl
Example Command Output
```text
Running with active workspace: 'default' (global)
Running with active stack: 'default' (global)
Successfully registered artifact_store `gcs-zenml-bucket-sl`.
```
```sh
zenml artifact-store connect gcs-zenml-bucket-sl --connector gcs-zenml-bucket-sl
```
Example Command Output
```text
Running with active workspace: 'default' (global)
Running with active stack: 'default' (global)
Successfully connected artifact store `gcs-zenml-bucket-sl` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 687 |
Local Orchestrator
Orchestrating your pipelines to run locally.
The local orchestrator is an orchestrator flavor that comes built-in with ZenML and runs your pipelines locally.
When to use it
The local orchestrator is part of your default stack when you're first getting started with ZenML. Because it runs locally on your machine, it requires no additional setup and is easy to use and debug.
You should use the local orchestrator if:
you're just getting started with ZenML and want to run pipelines without setting up any cloud infrastructure.
you're writing a new pipeline and want to experiment and debug quickly
How to deploy it
The local orchestrator comes with ZenML and works without any additional setup.
How to use it
To use the local orchestrator, we can register it and use it in our active stack:
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=local
# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
You can now run any ZenML pipeline using the local orchestrator:
python file_that_runs_a_zenml_pipeline.py
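Such a file can be as small as the following sketch:

```python
# file_that_runs_a_zenml_pipeline.py
from zenml import pipeline, step

@step
def say_hello() -> str:
    return "Hello, ZenML!"

@pipeline
def hello_pipeline():
    say_hello()

if __name__ == "__main__":
    hello_pipeline()
```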
For more information and a full list of configurable attributes of the local orchestrator, check out the SDK Docs .
PreviousOrchestrators
NextLocal Docker Orchestrator
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/local | 288 |
ke actions) can happen here
    return profile.view()

Visualizing whylogs Profiles
You can view visualizations of the whylogs profiles generated by your pipeline steps directly in the ZenML dashboard by clicking on the respective artifact in the pipeline run DAG.
Alternatively, if you are running inside a Jupyter notebook, you can load and render the whylogs profiles using the artifact.visualize() method, e.g.:
from typing import Optional

from zenml.client import Client

def visualize_statistics(
    step_name: str, reference_step_name: Optional[str] = None
) -> None:
    """Helper function to visualize whylogs statistics from step artifacts.

    Args:
        step_name: step that generated and returned a whylogs profile
        reference_step_name: an optional second step that generated a whylogs
            profile to use for data drift visualization where two whylogs
            profiles are required.
    """
    pipe = Client().get_pipeline(pipeline="data_profiling_pipeline")
    whylogs_step = pipe.last_run.steps[step_name]
    whylogs_step.visualize()

if __name__ == "__main__":
    visualize_statistics("data_loader")
    visualize_statistics("train_data_profiler", "test_data_profiler")
PreviousEvidently
NextDevelop a custom data validator
Last updated 15 days ago | stack-components | https://docs.zenml.io/stack-components/data-validators/whylogs | 252 |
byproduct of implementing a high-security profile.

Note that the gcloud local client can only be configured with credentials issued by the GCP Service Connector if the connector is configured with the GCP user account authentication method or the GCP service account authentication method and if the generate_temporary_tokens option is set to true in the Service Connector configuration.
Only the gcloud local application default credentials configuration will be updated by the GCP Service Connector configuration. This makes it possible to use libraries and SDKs that use the application default credentials to access GCP resources.
The following shows an example of configuring the local Kubernetes CLI to access a GKE cluster reachable through a GCP Service Connector:
zenml service-connector list --name gcp-user-account
Example Command Output
┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓
┃ ACTIVE │ NAME │ ID │ TYPE │ RESOURCE TYPES │ RESOURCE NAME │ SHARED │ OWNER │ EXPIRES IN │ LABELS ┃
┠────────┼──────────────────┼──────────────────────────────────────┼────────┼───────────────────────┼───────────────┼────────┼─────────┼────────────┼────────┨
┃ │ gcp-user-account │ ddbce93f-df14-4861-a8a4-99a80972f3bc │ 🔵 gcp │ 🔵 gcp-generic │ <multiple> │ ➖ │ default │ │ ┃
┃ │ │ │ │ 📦 gcs-bucket │ │ │ │ │ ┃
┃ │ │ │ │ 🌀 kubernetes-cluster │ │ │ │ │ ┃
┃ │ │ │ │ 🐳 docker-registry │ │ │ │ │ ┃ | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 484 |
ZENML_SECRETS_STORE_AUTH_METHOD: The GCP Service Connector authentication method to use (e.g. service-account).

ZENML_SECRETS_STORE_AUTH_CONFIG: The GCP Service Connector configuration, in JSON format (e.g. {"project_id":"my-project","service_account_json":{ ... }}).
Note: The remaining configuration options are deprecated and may be removed in a future release. Instead, you should set the ZENML_SECRETS_STORE_AUTH_METHOD and ZENML_SECRETS_STORE_AUTH_CONFIG variables to use the GCP Service Connector authentication method.
ZENML_SECRETS_STORE_PROJECT_ID: The GCP project ID to use. This must be set to the project ID where the GCP Secrets Manager service that you want to use is located.
GOOGLE_APPLICATION_CREDENTIALS: The path to the GCP service account credentials file to use for authentication. This must be set to a valid GCP service account credentials file that has access to the GCP Secrets Manager service that you want to use. If you are using a GCP service account attached to a GKE cluster to authenticate, you can omit this variable. NOTE: the path to the credentials file must be mounted into the container.
These configuration options are only relevant if you're using Azure Key Vault as the secrets store backend.
ZENML_SECRETS_STORE_TYPE: Set this to azure in order to set this type of secret store.
ZENML_SECRETS_STORE_KEY_VAULT_NAME: The name of the Azure Key Vault. This must be set to point to the Azure Key Vault instance that you want to use.
The Azure Secrets Store uses the ZenML Azure Service Connector under the hood to authenticate with the Azure Key Vault API. This means that you can use any of the authentication methods supported by the Azure Service Connector to authenticate with the Azure Key Vault API. The following configuration options are supported:
ZENML_SECRETS_STORE_AUTH_METHOD: The Azure Service Connector authentication method to use (e.g. service-principal).
ZENML_SECRETS_STORE_AUTH_CONFIG: The Azure Service Connector configuration, in JSON format (e.g. {"tenant_id":"my-tenant-id","client_id":"my-client-id","client_secret": "my-client-secret"}). | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-docker | 445 |
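Pulled together, passing such a configuration to a Docker-deployed ZenML server could look like this sketch (all values are placeholders):

```sh
docker run -it -d -p 8080:8080 --name zenml \
    --env ZENML_SECRETS_STORE_TYPE=azure \
    --env ZENML_SECRETS_STORE_KEY_VAULT_NAME=my-key-vault \
    --env ZENML_SECRETS_STORE_AUTH_METHOD=service-principal \
    --env ZENML_SECRETS_STORE_AUTH_CONFIG='{"tenant_id":"...","client_id":"...","client_secret":"..."}' \
    zenmldocker/zenml-server
```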
the different components of the Vertex orchestrator:

the ZenML client environment is the environment where you run the ZenML code responsible for building the pipeline Docker image and submitting the pipeline to Vertex AI, among other things. This is usually your local machine or some other environment used to automate running pipelines, like a CI/CD job. This environment needs to be able to authenticate with GCP and needs to have the necessary permissions to create a job in Vertex Pipelines (e.g. the Vertex AI User role). If you are planning to run pipelines on a schedule, the ZenML client environment also needs additional permissions:

the Storage Object Creator Role to be able to write the pipeline JSON file to the artifact store directly (NOTE: not needed if the Artifact Store is configured with credentials or is linked to a Service Connector)
the Vertex AI pipeline environment is the GCP environment in which the pipeline steps themselves run. The Vertex AI pipeline runs in the context of a GCP service account which we'll call here the workload service account. The workload service account can be explicitly configured in the orchestrator configuration via the workload_service_account parameter. If it is omitted, the orchestrator will use the Compute Engine default service account for the GCP project in which the pipeline is running. This service account needs to have the following permissions:

permissions to run a Vertex AI pipeline (e.g. the Vertex AI Service Agent role).
As you can see, there can be dedicated service accounts involved in running a Vertex AI pipeline. That's two service accounts if you also use a service account to authenticate to GCP in the ZenML client environment. However, you can keep it simple and use the same service account everywhere.
Configuration use-case: local gcloud CLI with user account | stack-components | https://docs.zenml.io/stack-components/orchestrators/vertex | 355 |
strator supports specifying resources in what way.

If you're using an orchestrator which does not support this feature, or whose underlying infrastructure does not cover your requirements, you can also take a look at step operators, which allow you to execute individual steps of your pipeline in environments independent of your orchestrator.
Ensure your container is CUDA-enabled
To run steps or pipelines on GPUs, it's crucial to have the necessary CUDA tools installed in the environment. This section will guide you on how to configure your environment to utilize GPU capabilities effectively.
Note that these configuration changes are required for the GPU hardware to be properly utilized. If you don't update the settings, your steps might run, but they will not see any boost in performance from the custom hardware.
All steps running on GPU-backed hardware will be executed within a containerized environment, whether you're using the local Docker orchestrator or a cloud instance of Kubeflow. Therefore, you need to make two amendments to your Docker settings for the relevant steps:
1. Specify a CUDA-enabled parent image in your DockerSettings
For complete details, refer to the containerization page that explains how to do this. As an example, if you want to use the latest CUDA-enabled official PyTorch image for your entire pipeline run, you can include the following code:
from zenml import pipeline
from zenml.config import DockerSettings
docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime")
@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
...
For TensorFlow, you might use the tensorflow/tensorflow:latest-gpu image, as detailed in the official TensorFlow documentation or their DockerHub overview.
2. Add ZenML as an explicit pip requirement | how-to | https://docs.zenml.io/how-to/training-with-gpus | 359 |
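A sketch of what this amounts to, assuming the requirements parameter of DockerSettings is used and with placeholder version pins:

```python
from zenml.config import DockerSettings

docker_settings = DockerSettings(
    parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime",
    requirements=["zenml==0.56.3", "torchvision"],
)
```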
zenml stack register <STACK_NAME> -a <AZURE_STORE_NAME> ... --set

When you register the Azure Artifact Store, you can create a ZenML Secret to store a variety of Azure credentials and then reference it in the Artifact Store configuration:
to use an Azure storage account key , set account_name to your account name and one of account_key or sas_token to the Azure key or SAS token value as attributes in the ZenML secret
to use an Azure storage account key connection string , configure the connection_string attribute in the ZenML secret to your Azure Storage Key connection string
to use Azure Service Principal credentials , create an Azure Service Principal and then set account_name to your account name and client_id, client_secret and tenant_id to the client ID, secret and tenant ID of your service principal in the ZenML secret
This method has some advantages over the implicit authentication method:
you don't need to install and configure the Azure CLI on your host
you don't need to care about enabling your other stack components (orchestrators, step operators and model deployers) to have access to the artifact store through Azure Managed Identities
you can combine the Azure artifact store with other stack components that are not running in Azure
Configuring Azure credentials in a ZenML secret and then referencing them in the Artifact Store configuration could look like this:
# Store the Azure storage account key in a ZenML secret
zenml secret create az_secret \
--account_name='<YOUR_AZURE_ACCOUNT_NAME>' \
--account_key='<YOUR_AZURE_ACCOUNT_KEY>'
# or if you want to use a connection string
zenml secret create az_secret \
--connection_string='<YOUR_AZURE_CONNECTION_STRING>'
# or if you want to use Azure ServicePrincipal credentials
zenml secret create az_secret \
--account_name='<YOUR_AZURE_ACCOUNT_NAME>' \
--tenant_id='<YOUR_AZURE_TENANT_ID>' \
--client_id='<YOUR_AZURE_CLIENT_ID>' \
--client_secret='<YOUR_AZURE_CLIENT_SECRET>' | stack-components | https://docs.zenml.io/stack-components/artifact-stores/azure | 411 |
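Once the secret exists, its values can be referenced in the Artifact Store configuration, roughly like this sketch (the container name is a placeholder):

```sh
zenml artifact-store register azure_store -f azure \
    --path='az://your-container' \
    --account_name='{{az_secret.account_name}}' \
    --account_key='{{az_secret.account_key}}'
```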
task that requires a lot of effort and maintenance.

Stack Components don't implement any kind of verification regarding the validity and permissions of configured credentials. If the credentials are invalid, or if they lack the proper permissions to access the remote resource or service, you will only find out later, when running a pipeline fails at runtime.
ultimately, given that different Stack Component flavors rely on the same type of resource or cloud provider, it is not good design to duplicate the logic that handles authentication and authorization in each Stack Component implementation.
These drawbacks are addressed by Service Connectors.
Without Service Connectors, credentials are stored directly in the Stack Component configuration or ZenML Secret and are directly used in the runtime environment. The Stack Component implementation is directly responsible for validating credentials, authenticating and connecting to the infrastructure service. This is illustrated in the following diagram:
When Service Connectors are involved in the authentication and authorization process, they can act as brokers. The credentials validation and authentication process takes place on the ZenML server. In most cases, the main credentials never have to leave the ZenML server as the Service Connector automatically converts them into short-lived credentials with a reduced set of privileges and issues these credentials to clients. Furthermore, multiple Stack Components of different flavors can use the same Service Connector to access different types or resources with the same credentials: | how-to | https://docs.zenml.io/how-to/auth-management | 265 |
Security best practices
Best practices concerning the various authentication methods implemented by Service Connectors.
Service Connector Types, especially those targeted at cloud providers, offer a plethora of authentication methods matching those supported by remote cloud platforms. While there is no single authentication standard that unifies this process, there are some patterns that are easily identifiable and can be used as guidelines when deciding which authentication method to use to configure a Service Connector.
This section explores some of those patterns and gives some advice regarding which authentication methods are best suited for your needs.
This section may require some general knowledge about authentication and authorization to be properly understood. We tried to keep it simple and limit ourselves to talking about high-level concepts, but some areas may get a bit too technical.
Username and password
The key takeaway is this: you should avoid using your primary account password as authentication credentials as much as possible. If there are alternative authentication methods that you can use or other types of credentials (e.g. session tokens, API keys, API tokens), you should always try to use those instead.
Ultimately, if you have no choice, be cognizant of the third parties you share your passwords with. If possible, they should never leave the premises of your local host or development environment.
This is the typical authentication method that uses a username or account name plus the associated password. While this is the de facto method used to log in with web consoles and local CLIs, this is the least secure of all authentication methods and never something you want to share with other members of your team or organization or use to authenticate automated workloads. | how-to | https://docs.zenml.io/how-to/auth-management/best-security-practices | 317 |
Kubernetes Service Connector
Configuring Kubernetes Service Connectors to connect ZenML to Kubernetes clusters.
The ZenML Kubernetes service connector facilitates authenticating and connecting to a Kubernetes cluster. The connector can be used to access any generic Kubernetes cluster by providing pre-authenticated Kubernetes Python clients to Stack Components that are linked to it, and also allows configuring the local Kubernetes CLI (i.e. kubectl).
Prerequisites
The Kubernetes Service Connector is part of the Kubernetes ZenML integration. You can either install the entire integration or use a pypi extra to install it independently of the integration:
pip install "zenml[connectors-kubernetes]" installs only prerequisites for the Kubernetes Service Connector Type
zenml integration install kubernetes installs the entire Kubernetes ZenML integration
A local Kubernetes CLI (i.e. kubectl) and local kubectl configuration contexts are not required to access Kubernetes clusters in your Stack Components through the Kubernetes Service Connector.
$ zenml service-connector list-types --type kubernetes
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━━┓
┃ NAME │ TYPE │ RESOURCE TYPES │ AUTH METHODS │ LOCAL │ REMOTE ┃
┠──────────────────────────────┼───────────────┼───────────────────────┼──────────────┼───────┼────────┨
┃ Kubernetes Service Connector │ 🌀 kubernetes │ 🌀 kubernetes-cluster │ password │ ✅ │ ✅ ┃
┃ │ │ │ token │ │ ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛
Resource Types
The Kubernetes Service Connector only supports authenticating to and granting access to a generic Kubernetes cluster. This type of resource is identified by the kubernetes-cluster Resource Type. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/kubernetes-service-connector | 470 |
@step(settings={"resources": ResourceSettings(...)})
def my_step() -> None:
    ...

Deprecating the requirements and required_integrations parameters
Users used to be able to pass requirements and required_integrations directly in the @pipeline decorator, but now need to pass them through settings:
How to migrate: Simply remove the parameters and use the DockerSettings instead
from zenml.config import DockerSettings
@step(settings={"docker": DockerSettings(requirements=[...], required_integrations=[...])})
def my_step() -> None:
...
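The same settings can also be applied at the pipeline level, which is the closest equivalent of the old @pipeline parameters. A minimal sketch (the listed requirements and integrations are placeholders):

```python
from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(
    requirements=["scikit-learn"],  # placeholder requirement
    required_integrations=["mlflow"],  # placeholder integration
)

@pipeline(settings={"docker": docker_settings})
def my_pipeline() -> None:
    ...
```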
Read more here.
A new pipeline intermediate representation
All the aforementioned configurations, as well as additional information required to run ZenML pipelines, are now combined into an intermediate representation called PipelineDeployment. Instead of the user-facing BaseStep and BasePipeline classes, all the ZenML orchestrators and step operators now use this intermediate representation to run pipelines and steps.
How to migrate: If you have written a custom orchestrator or step operator, then you should see the new base abstractions (seen in the links). You can adjust your stack component implementations accordingly.
PipelineSpec now uniquely defines pipelines
Once a pipeline has been executed, it is represented by a PipelineSpec that uniquely identifies it. Therefore, users are no longer able to edit a pipeline after it has been run. There are now three options to get around this:
Pipeline runs can be created without being associated with a pipeline explicitly: We call these unlisted runs. Read more about unlisted runs here.
Pipelines can be deleted and created again.
Pipelines can be given unique names each time they are run to uniquely identify them.
How to migrate: No code changes, but rather keep in mind the behavior (e.g. in a notebook setting) when quickly iterating over pipelines as experiments.
New post-execution workflow
The Post-execution workflow has changed as follows: | reference | https://docs.zenml.io/v/docs/reference/migration-guide/migration-zero-twenty | 370 |
other remote stack components also running in GCP.

This method uses the implicit GCP authentication available in the environment where the ZenML code is running. On your local machine, this is the quickest way to configure a GCS Artifact Store. You don't need to supply credentials explicitly when you register the GCS Artifact Store, as it leverages the local credentials and configuration that the Google Cloud CLI stores on your local machine. However, you will need to install and set up the Google Cloud CLI on your machine as a prerequisite, as covered in the Google Cloud documentation, before you register the GCS Artifact Store.
Certain dashboard functionality, such as visualizing or deleting artifacts, is not available when using an implicitly authenticated artifact store together with a deployed ZenML server because the ZenML server will not have permission to access the filesystem.
The implicit authentication method also needs to be coordinated with other stack components that are highly dependent on the Artifact Store and need to interact with it directly to function. If these components are not running on your machine, they do not have access to the local Google Cloud CLI configuration and will encounter authentication failures while trying to access the GCS Artifact Store:
Orchestrators need to access the Artifact Store to manage pipeline artifacts
Step Operators need to access the Artifact Store to manage step-level artifacts
Model Deployers need to access the Artifact Store to load served models
To enable these use cases, it is recommended to use a GCP Service Connector to link your GCS Artifact Store to the remote GCS bucket.
To set up the GCS Artifact Store to authenticate to GCP and access a GCS bucket, it is recommended to leverage the many features provided by the GCP Service Connector such as auto-configuration, best security practices regarding long-lived credentials and reusing the same credentials across multiple stack components. | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/gcp | 366 |
from typing import Optional

from mlflow.tracking import MlflowClient, artifact_utils

from zenml import get_step_context, step
from zenml.client import Client
from zenml.integrations.mlflow.services.mlflow_deployment import (
    MLFlowDeploymentConfig,
    MLFlowDeploymentService,
)


@step
def deploy_model() -> Optional[MLFlowDeploymentService]:
    # Deploy a model using the MLflow Model Deployer
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    experiment_tracker = zenml_client.active_stack.experiment_tracker
    # Let's get the run id of the current pipeline
    mlflow_run_id = experiment_tracker.get_run_id(
        experiment_name=get_step_context().pipeline_name,
        run_name=get_step_context().run_name,
    )
    # Once we have the run id, we can get the model URI using the mlflow client
    experiment_tracker.configure_mlflow()
    client = MlflowClient()
    model_name = "model"  # set the model name that was logged
    model_uri = artifact_utils.get_artifact_uri(
        run_id=mlflow_run_id, artifact_path=model_name
    )
    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
        description="An example of deploying a model using the MLflow Model Deployer",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        model_uri=model_uri,
        model_name=model_name,
        workers=1,
        mlserver=False,
        timeout=300,
    )
    service = model_deployer.deploy_model(mlflow_deployment_config)
    return service
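The step can then be used as the final step of a regular ZenML pipeline, for example (a sketch; the preceding training steps are elided):

```python
from zenml import pipeline

@pipeline
def deployment_pipeline():
    # ... training and evaluation steps would run before this ...
    deploy_model()
```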
Configuration
Within the MLFlowDeploymentService you can configure:
name: The name of the deployment.
description: The description of the deployment.
pipeline_name: The name of the pipeline that deployed the MLflow prediction server.
pipeline_step_name: The name of the step that deployed the MLflow prediction server.
model_name: The name of the model that is deployed. When using a model registry, this must be a valid registered model name.
model_version: The version of the model that is deployed. When using a model registry, this must be a valid registered model version.
principal, please consult the Azure documentation.

This method uses the implicit Azure authentication available in the environment where the ZenML code is running. On your local machine, this is the quickest way to configure an Azure Artifact Store. You don't need to supply credentials explicitly when you register the Azure Artifact Store; instead, you have to set one of the following sets of environment variables:
to use an Azure storage account key, set AZURE_STORAGE_ACCOUNT_NAME to your account name and one of AZURE_STORAGE_ACCOUNT_KEY or AZURE_STORAGE_SAS_TOKEN to the Azure key value.
to use an Azure storage account key connection string, set AZURE_STORAGE_CONNECTION_STRING to your Azure Storage Key connection string
to use Azure Service Principal credentials, create an Azure Service Principal and then set AZURE_STORAGE_ACCOUNT_NAME to your account name and AZURE_STORAGE_CLIENT_ID, AZURE_STORAGE_CLIENT_SECRET and AZURE_STORAGE_TENANT_ID to the client ID, secret and tenant ID of your service principal
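For example, to use the first option from the list above, you could set the variables from Python before registering the Artifact Store, or export them in your shell; the values below are placeholders:

```python
import os

# Placeholders: substitute your actual storage account name and key
os.environ["AZURE_STORAGE_ACCOUNT_NAME"] = "mystorageaccount"
os.environ["AZURE_STORAGE_ACCOUNT_KEY"] = "<AZURE_STORAGE_ACCOUNT_KEY>"
```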
Certain dashboard functionality, such as visualizing or deleting artifacts, is not available when using an implicitly authenticated artifact store together with a deployed ZenML server because the ZenML server will not have permission to access the filesystem.
The implicit authentication method also needs to be coordinated with other stack components that are highly dependent on the Artifact Store and need to interact with it directly to function. If these components are not running on your machine, they do not have access to the local environment variables and will encounter authentication failures while trying to access the Azure Artifact Store:
Orchestrators need to access the Artifact Store to manage pipeline artifacts
Step Operators need to access the Artifact Store to manage step-level artifacts
Model Deployers need to access the Artifact Store to load served models | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/azure | 346 |
register the Azure Container Registry as follows:

# Register the Azure container registry and reference the target ACR registry URI
zenml container-registry register <CONTAINER_REGISTRY_NAME> -f azure \
--uri=<REGISTRY_URL>
# Connect the Azure container registry to the target ACR registry via an Azure Service Connector
zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
A non-interactive version that connects the Azure Container Registry to a target ACR registry through an Azure Service Connector:
zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
Example Command Output
$ zenml container-registry connect azure-demo --connector azure-demo
Successfully connected container registry `azure-demo` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃
┠──────────────────────────────────────┼────────────────┼────────────────┼────────────────────┼───────────────────────────────────────┨
┃ db5821d0-a658-4504-ae96-04c3302d8f85 │ azure-demo │ 🇦 azure │ 🐳 docker-registry │ demozenmlcontainerregistry.azurecr.io ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
As a final step, you can use the Azure Container Registry in a ZenML Stack:
# Register and set a stack with the new container registry
zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set
Linking the Azure Container Registry to a Service Connector means that your local Docker client is no longer authenticated to access the remote registry. If you need to manually interact with the remote registry via the Docker CLI, you can use the local login Service Connector feature to temporarily authenticate your local Docker client to the remote registry: | stack-components | https://docs.zenml.io/stack-components/container-registries/azure | 532 |
from zenml.client import Client
client = Client()

step = client.get_pipeline_run("<PIPELINE_RUN_NAME>").steps["step_name"]
print(step.run_metadata["metadata_key"].value)
PreviousAttach metadata to an artifact
NextGroup metadata
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/track-metrics-metadata/attach-metadata-to-steps | 50 |
rue
# The username and password for the database.
database_username: user
database_password:
# The URL of the database to use for the ZenML server.
database_url:
# The path to the SSL CA certificate to use for the database connection.
database_ssl_ca:
# The path to the client SSL certificate to use for the database connection.
database_ssl_cert:
# The path to the client SSL key to use for the database connection.
database_ssl_key:
# Whether to verify the database server SSL certificate.
database_ssl_verify_server_cert: true
# The log level to set the terraform client. Choose one of TRACE,
# DEBUG, INFO, WARN, or ERROR (case insensitive).
log_level: ERROR
Feel free to include only those variables that you want to customize in your file. For all other variables, the default values (shown above) will be used.
Cloud-specific settings
# The AWS region to deploy to.
region: eu-west-1
# The name of the RDS instance to create
rds_name: zenmlserver
# Name of RDS database to create.
db_name: zenmlserver
# Type of RDS database to create.
db_type: mysql
# Version of RDS database to create.
db_version: 5.7.38
# Instance class of RDS database to create.
db_instance_class: db.t3.micro
# Allocated storage of RDS database to create.
db_allocated_storage: 5
The database_username and database_password from the general config is used to set those variables for the AWS RDS instance.
# The project in GCP to deploy the server in.
project_id:
# The GCP region to deploy to.
region: europe-west3
# The name of the CloudSQL instance to create.
cloudsql_name: zenmlserver
# Name of CloudSQL database to create.
db_name: zenmlserver
# Instance class of CloudSQL database to create.
db_instance_tier: db-n1-standard-1
# Allocated storage of CloudSQL database, in GB, to create.
db_disk_size: 10
# Whether or not to enable the Secrets Manager API. Disable this if you
# don't have ListServices permissions on the project.
enable_secrets_manager_api: true
The project_id is required to be set. | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-zenml-cli | 478 |
📓Model Registries
Tracking and managing ML models.
Model registries are centralized storage solutions for managing and tracking machine learning models across various stages of development and deployment. They help track the different versions and configurations of each model and enable reproducibility. By storing metadata such as version, configuration, and metrics, model registries help streamline the management of trained models. In ZenML, model registries are Stack Components that allow for the easy retrieval, loading, and deployment of trained models. They also provide information on the pipeline in which the model was trained and how to reproduce it.
Model Registry Concepts and Terminology
ZenML provides a unified abstraction for model registries through which it is possible to handle and manage the concepts of model groups, versions, and stages in a consistent manner regardless of the underlying registry tool or platform being used. The following concepts are useful to be aware of for this abstraction:
RegisteredModel: A logical grouping of models that can be used to track different versions of a model. It holds information about the model, such as its name, description, and tags, and can be created by the user or automatically created by the model registry when a new model is logged.
RegistryModelVersion: A specific version of a model identified by a unique version number or string. It holds information about the model, such as its name, description, tags, and metrics, and a reference to the model artifact logged to the model registry. In ZenML, it also holds a reference to the pipeline name, pipeline run ID, and step name. Each model version is associated with a model registration. | stack-components | https://docs.zenml.io/v/docs/stack-components/model-registries | 325 |
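As a rough sketch of how these concepts surface in code, the model registry of the active stack can be used through the unified abstraction; the method names follow the base interface, but treat the exact signatures as indicative and check the SDK docs for your version:

```python
from zenml.client import Client

model_registry = Client().active_stack.model_registry

# A RegisteredModel groups all versions of one model
model_registry.register_model(
    name="my_model",  # placeholder model name
    description="Example registered model",
)

# Each RegistryModelVersion belongs to a registered model
for version in model_registry.list_model_versions(name="my_model"):
    print(version.version, version.description)
```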
If ZenML does not find an initialized ZenML repository in any parent directory, it will default to the current working directory, but it is usually better not to rely on this mechanism and to initialize zenml at the root.

Afterward, you should see the new flavor in the list of available flavors:
zenml orchestrator flavor list
It is important to draw attention to when and how these base abstractions come into play in a ZenML workflow.
The CustomOrchestratorFlavor class is imported and utilized upon the creation of the custom flavor through the CLI.
The CustomOrchestratorConfig class is imported when someone tries to register/update a stack component with this custom flavor. In particular, during the registration process of the stack component, the config will be used to validate the values given by the user. As Config objects are inherently pydantic objects, you can also add your own custom validators here.
The CustomOrchestrator only comes into play when the component is ultimately in use.
The design behind this interaction lets us separate the configuration of the flavor from its implementation. This way we can register flavors and components even when the major dependencies behind their implementation are not installed in our local setting (assuming the CustomOrchestratorFlavor and the CustomOrchestratorConfig are implemented in a different module/path than the actual CustomOrchestrator).
Implementation guide
Create your orchestrator class: This class should either inherit from BaseOrchestrator, or more commonly from ContainerizedOrchestrator. If your orchestrator uses container images to run code, you should inherit from ContainerizedOrchestrator, which handles building all Docker images for the pipeline to be executed. If your orchestrator does not use container images, you'll be responsible for ensuring that the execution environment contains all the necessary requirements and code files to run the pipeline.
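A minimal sketch of such a class follows; the two methods shown are the abstract ones the base class expects, but treat the exact signatures as indicative and check the base class in your ZenML version:

```python
from zenml.orchestrators import ContainerizedOrchestrator


class MyOrchestrator(ContainerizedOrchestrator):
    """Sketch of a custom orchestrator implementation."""

    def get_orchestrator_run_id(self) -> str:
        # Must return an ID that is unique for every pipeline run
        ...

    def prepare_or_run_pipeline(self, deployment, stack, environment):
        # Submit each step to the execution backend, using the Docker
        # image that ZenML built for it
        for step_name in deployment.step_configurations:
            image = self.get_image(deployment=deployment, step_name=step_name)
            ...  # e.g. launch a container that runs this step
```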
authentication.
ED25519 key based authentication.SSH private keys configured in the connector will be distributed to all clients that use them to run pipelines with the HyperAI orchestrator. SSH keys are long-lived credentials that give unrestricted access to HyperAI instances.
When configuring the Service Connector, you must provide at least one hostname via hostnames and the username with which to log in. Optionally, you can provide an ssh_passphrase if applicable. This makes it possible to use the HyperAI service connector in multiple ways:
Create one service connector per HyperAI instance with different SSH keys.
Configure a reused SSH key just once for multiple HyperAI instances, then select the individual instance when creating the HyperAI orchestrator component.
Auto-configuration
This Service Connector does not support auto-discovery and extraction of authentication credentials from HyperAI instances. If this feature is useful to you or your organization, please let us know by messaging us in Slack or creating an issue on GitHub.
Stack Components use
The HyperAI Service Connector can be used by the HyperAI Orchestrator to deploy pipeline runs to HyperAI instances.
PreviousAzure Service Connector
NextManage stacks
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/auth-management/hyperai-service-connector | 240 |
This method has a few advantages over the implicit authentication method:
you don't need to care about enabling your other stack components (orchestrators, step operators and model deployers) to have access to the artifact store through GCP Service Accounts and Workload Identity
you can combine the GCS artifact store with other stack components that are not running in GCP
For this method, you need to create a user-managed GCP service account, grant it privileges to read and write to your GCS bucket (i.e. use the Storage Object Admin role) and then create a service account key.
With the service account key downloaded to a local file, you can register a ZenML secret and reference it in the GCS Artifact Store configuration as follows:
# Store the GCP credentials in a ZenML secret
zenml secret create gcp_secret \
--token=@path/to/service_account_key.json
# Register the GCS artifact store and reference the ZenML secret
zenml artifact-store register gcs_store -f gcp \
--path='gs://your-bucket' \
--authentication_secret=gcp_secret
# Register and set a stack with the new artifact store
zenml stack register custom_stack -a gcs_store ... --set
For more up-to-date information on the GCS Artifact Store implementation and its configuration, you can have a look at the SDK docs.
How do you use it?
Aside from the fact that the artifacts are stored in GCP Cloud Storage, using the GCS Artifact Store is no different from using any other flavor of Artifact Store.
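For example, once the stack is active, you can inspect the configured Artifact Store from Python just like with any other flavor:

```python
from zenml.client import Client

artifact_store = Client().active_stack.artifact_store
# For the GCS flavor, this prints the configured gs:// path
print(artifact_store.path)
```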
PreviousAmazon Simple Cloud Storage (S3)
NextAzure Blob Storage
Last updated 15 days ago | stack-components | https://docs.zenml.io/stack-components/artifact-stores/gcp | 350 |
er ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛
Alternatively, you can configure a Service Connector through the ZenML dashboard:
Note: Please remember to grant the entity associated with your cloud credentials permissions to access the Kubernetes cluster and to list accessible Kubernetes clusters. For a full list of permissions required to use an AWS Service Connector to access one or more Kubernetes clusters, please refer to the documentation for your Service Connector of choice or read the documentation available in the interactive CLI commands and dashboard. Service Connectors support many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.
If you already have one or more Service Connectors configured in your ZenML deployment, you can check which of them can be used to access the Kubernetes cluster that you want to use for your Seldon Core Model Deployer by running e.g.:
zenml service-connector list-resources --resource-type kubernetes-cluster
Example Command Output
The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃
┠──────────────────────────────────────┼────────────────┼────────────────┼───────────────────────┼───────────────────────────────────────────────┨
┃ bdf1dc76-e36b-4ab4-b5a6-5a9afea4822f │ eks-zenhacks │ 🔶 aws │ 🌀 kubernetes-cluster │ zenhacks-cluster ┃
┠──────────────────────────────────────┼────────────────┼────────────────┼───────────────────────┼───────────────────────────────────────────────┨ | stack-components | https://docs.zenml.io/stack-components/model-deployers/seldon | 467 |
The failure hook notifies the user about pipeline failure using the Alerter from the active stack. We use @step for success notification to only notify the user about a fully successful pipeline run and not about every successful step.
Inside the helper function build_message(), you will find an example on how developers can work with StepContext to form a proper notification:
from zenml import get_step_context
def build_message(status: str) -> str:
"""Builds a message to post.
Args:
status: Status to be set in text.
Returns:
str: Prepared message.
"""
step_context = get_step_context()
run_url = get_run_url(step_context.pipeline_run)
    return (
        f"Pipeline `{step_context.pipeline.name}` [{str(step_context.pipeline.id)}] {status}!\n"
        f"Run `{step_context.pipeline_run.name}` [{str(step_context.pipeline_run.id)}]\n"
        f"URL: {run_url}"
    )
from zenml.client import Client

# The alerter is fetched from the active stack
alerter = Client().active_stack.alerter

@step(enable_cache=False)
def notify_on_success() -> None:
    """Notifies user on pipeline success."""
    step_context = get_step_context()
    if alerter and step_context.pipeline_run.config.extra["notify_on_success"]:
        alerter.post(message=build_message(status="succeeded"))
Linking to the Alerter Stack component
A common use case is to use the Alerter component inside the failure or success hooks to notify relevant people. It is quite easy to do this:
from zenml import get_step_context
from zenml.client import Client
def on_failure():
step_name = get_step_context().step_run.name
Client().active_stack.alerter.post(f"{step_name} just failed!")
ZenML provides standard failure and success hooks that use the alerter you have configured in your stack. Here's an example of how to use them in your pipelines:
from zenml.hooks import alerter_success_hook, alerter_failure_hook
@step(on_failure=alerter_failure_hook, on_success=alerter_success_hook)
def my_step(...):
...
To set up the local environment used below, follow the recommendations from the Project templates. | how-to | https://docs.zenml.io/how-to/build-pipelines/use-failure-success-hooks | 417 |
You should pick the one that best fits your use case.

If you already have one or more GCP Service Connectors configured in your ZenML deployment, you can check which of them can be used to access the generic GCP resources required by the GCP Image Builder by running e.g.:
zenml service-connector list-resources --resource-type gcp-generic
Example Command Output
The following 'gcp-generic' resources can be accessed by service connectors configured in your workspace:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃
┠──────────────────────────────────────┼────────────────┼────────────────┼────────────────┼────────────────┨
┃ bfdb657d-d808-47e7-9974-9ba6e4919d83 │ gcp-generic │ 🔵 gcp │ 🔵 gcp-generic │ zenml-core ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛
After having set up or decided on a GCP Service Connector to use to authenticate to GCP, you can register the GCP Image Builder as follows:
zenml image-builder register <IMAGE_BUILDER_NAME> \
--flavor=gcp \
--cloud_builder_image=<BUILDER_IMAGE_NAME> \
--network=<DOCKER_NETWORK> \
--build_timeout=<BUILD_TIMEOUT_IN_SECONDS>
# Connect the GCP Image Builder to GCP via a GCP Service Connector
zenml image-builder connect <IMAGE_BUILDER_NAME> -i
A non-interactive version that connects the GCP Image Builder to a target GCP Service Connector:
zenml image-builder connect <IMAGE_BUILDER_NAME> --connector <CONNECTOR_ID>
Example Command Output
$ zenml image-builder connect gcp-image-builder --connector gcp-generic
Successfully connected image builder `gcp-image-builder` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ | stack-components | https://docs.zenml.io/stack-components/image-builders/gcp | 572 |
┃┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────┨
┃ 373a73c2-8295-45d4-a768-45f5a0f744ea │ aws-multi-type │ 🔶 aws │ 🔶 aws-generic │ us-east-1 ┃
┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────┨
┃ │ │ │ 📦 s3-bucket │ s3://aws-ia-mwaa-715803424590 ┃
┃ │ │ │ │ s3://zenfiles ┃
┃ │ │ │ │ s3://zenml-demos ┃
┃ │ │ │ │ s3://zenml-generative-chat ┃
┃ │ │ │ │ s3://zenml-public-datasets ┃
┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────┨ | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 309 |
from zenml.integrations.evidently.metrics import EvidentlyMetricConfig
from zenml.integrations.evidently.steps import (
    EvidentlyColumnMapping,
    evidently_report_step,
)

text_data_report = evidently_report_step.with_options(
parameters=dict(
column_mapping=EvidentlyColumnMapping(
target="Rating",
numerical_features=["Age", "Positive_Feedback_Count"],
categorical_features=[
"Division_Name",
"Department_Name",
"Class_Name",
],
text_features=["Review_Text", "Title"],
),
metrics=[
EvidentlyMetricConfig.metric("DataQualityPreset"),
EvidentlyMetricConfig.metric(
"TextOverviewPreset", column_name="Review_Text"
),
EvidentlyMetricConfig.metric_generator(
"ColumnRegExpMetric",
columns=["Review_Text", "Title"],
reg_exp=r"[A-Z][A-Za-z0-9 ]*",
),
],
# We need to download the NLTK data for the TextOverviewPreset
download_nltk_data=True,
    ),
)
The configuration shown in the example is the equivalent of running the following Evidently code inside the step:
from evidently.metrics import ColumnRegExpMetric
from evidently.metric_preset import DataQualityPreset, TextOverviewPreset
from evidently import ColumnMapping
from evidently.report import Report
from evidently.metrics.base_metric import generate_column_metrics

import nltk

nltk.download("words")
nltk.download("wordnet")
nltk.download("omw-1.4")

column_mapping = ColumnMapping(
    target="Rating",
    numerical_features=["Age", "Positive_Feedback_Count"],
    categorical_features=[
        "Division_Name",
        "Department_Name",
        "Class_Name",
    ],
    text_features=["Review_Text", "Title"],
)

report = Report(
    metrics=[
        DataQualityPreset(),
        TextOverviewPreset(column_name="Review_Text"),
        generate_column_metrics(
            ColumnRegExpMetric,
            columns=["Review_Text", "Title"],
            parameters={"reg_exp": r"[A-Z][A-Za-z0-9 ]*"},
        ),
    ]
)

# The datasets are those that are passed to the Evidently step
# as input artifacts
report.run(
    current_data=current_dataset,
    reference_data=reference_dataset,
    column_mapping=column_mapping,
)
Let's break this down... | stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/evidently | 428 |
n be used.
Label Studio Annotator Stack Component

Our Label Studio annotator component inherits from the BaseAnnotator class. There are some methods that are core methods that must be defined, like being able to register or get a dataset. Most annotators handle things like the storage of state and have their own custom features, so there are quite a few extra methods specific to Label Studio.
The core Label Studio functionality that's currently enabled includes a way to register your datasets, export any annotations for use in separate steps as well as start the annotator daemon process. (Label Studio requires a server to be running in order to use the web interface, and ZenML handles the provisioning of this server locally using the details you passed in when registering the component unless you've specified that you want to use a deployed instance.)
Standard Steps
ZenML offers some standard steps (and their associated config objects) which will get you up and running with the Label Studio integration quickly. These include:
LabelStudioDatasetRegistrationConfig - a step config object to be used when registering a dataset with Label Studio using the get_or_create_dataset step
LabelStudioDatasetSyncConfig - a step config object to be used when registering a dataset with Label Studio using the sync_new_data_to_label_studio step. Note that this requires a ZenML secret to have been pre-registered with your artifact store as being the one that holds authentication secrets specific to your particular cloud provider. (Label Studio provides some documentation on what permissions these secrets require here.)
get_or_create_dataset step - This takes a LabelStudioDatasetRegistrationConfig config object which includes the name of the dataset. If it exists, this step will return the name, but if it doesn't exist then ZenML will register the dataset along with the appropriate label config with Label Studio. | stack-components | https://docs.zenml.io/stack-components/annotators/label-studio | 361 |
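Put together, the registration step could be wired into a pipeline roughly as follows; this is a sketch that assumes the step is importable from the integration's steps module as named above, and the parameter names are indicative only:

```python
from zenml import pipeline
from zenml.integrations.label_studio.steps import get_or_create_dataset

@pipeline
def annotation_pipeline():
    # Placeholder label config and dataset name; check the integration's
    # SDK docs for the exact parameters of your ZenML version
    dataset_name = get_or_create_dataset(
        label_config="<LABEL_STUDIO_XML_LABEL_CONFIG>",
        dataset_name="my_annotation_dataset",
    )
```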
pipeline, (e.g. the Vertex AI Service Agent role).A key is also needed for the "client" service account. You can create a key for this service account and download it to your local machine (e.g. in a connectors-vertex-ai-workload.json file).
With all the service accounts and the key ready, we can register the GCP Service Connector and Vertex AI orchestrator as follows:
zenml service-connector register <CONNECTOR_NAME> --type gcp --auth-method=service-account --project_id=<PROJECT_ID> --service_account_json=@connectors-vertex-ai-workload.json --resource-type gcp-generic
zenml orchestrator register <ORCHESTRATOR_NAME> \
--flavor=vertex \
--location=<GCP_LOCATION> \
--synchronous=true \
--workload_service_account=<WORKLOAD_SERVICE_ACCOUNT_NAME>@<PROJECT_NAME>.iam.gserviceaccount.com
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
Configuring the stack
With the orchestrator registered, we can use it in our active stack:
# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Vertex AI. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
You can now run any ZenML pipeline using the Vertex orchestrator:
python file_that_runs_a_zenml_pipeline.py
Vertex UI
Vertex comes with its own UI that you can use to find further details about your pipeline runs, such as the logs of your steps.
For any runs executed on Vertex, you can get the URL to the Vertex UI in Python using the following code snippet:
from zenml.client import Client
pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")
orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value
Run pipelines on a schedule | stack-components | https://docs.zenml.io/stack-components/orchestrators/vertex | 443 |
tes-cluster │ sts-token │ │ ┃
┃ │ │ 🐳 docker-registry │ iam-role │ │ ┃
┃ │ │ │ session-token │ │ ┃
┃ │ │ │ federation-token │ │ ┃
┠──────────────────────────────┼───────────────┼───────────────────────┼───────────────────┼───────┼────────┨
┃ GCP Service Connector │ 🔵 gcp │ 🔵 gcp-generic │ implicit │ ✅ │ ✅ ┃
┃ │ │ 📦 gcs-bucket │ user-account │ │ ┃
┃ │ │ 🌀 kubernetes-cluster │ service-account │ │ ┃
┃ │ │ 🐳 docker-registry │ oauth2-token │ │ ┃
┃ │ │ │ impersonation │ │ ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛
From the above, you can see that there are not one but four Service Connector Types that can connect ZenML to Kubernetes clusters. The first one is a generic implementation that can be used with any standard Kubernetes cluster, including those that run on-premise. The other three deal exclusively with Kubernetes services managed by the AWS, GCP and Azure cloud providers.
Conversely, to list all currently registered Service Connector instances that provide access to Kubernetes clusters, one might run:
zenml service-connector list --resource_type kubernetes-cluster
Example Command Output
┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━┓ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 527 |
_NAME
Install Tekton Pipelines onto your cluster.

If one or more of the deployments are not in the Running state, try increasing the number of nodes in your cluster.
ZenML has only been tested with Tekton Pipelines >=0.38.3 and may not work with previous versions.
Infrastructure Deployment
A Tekton orchestrator can be deployed directly from the ZenML CLI:
zenml orchestrator deploy tekton_orchestrator --flavor=tekton --provider=<YOUR_PROVIDER> ...
You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
How to use it
To use the Tekton orchestrator, we need:
The ZenML tekton integration installed. If you haven't done so, run zenml integration install tekton -y
Docker installed and running.
Tekton pipelines deployed on a remote cluster. See the deployment section for more information.
The name of your Kubernetes context which points to your remote cluster. Run kubectl config get-contexts to see a list of available contexts.
A remote artifact store as part of your stack.
A remote container registry as part of your stack.
kubectl installed and the name of the Kubernetes configuration context which points to the target cluster (i.e. run kubectl config get-contexts to see a list of available contexts). This is optional (see below).
It is recommended that you set up a Service Connector and use it to connect ZenML Stack Components to the remote Kubernetes cluster, especially if you are using a Kubernetes cluster managed by a cloud provider like AWS, GCP or Azure. This guarantees that your Stack is fully portable to other environments and your pipelines are fully reproducible.
We can then register the orchestrator and use it in our active stack. This can be done in two ways: | stack-components | https://docs.zenml.io/stack-components/orchestrators/tekton | 395 |
───────┼────────────────────┼────────────────────┨
┃ │ cloud_kubeflow_stack │ b94df4d2-5b65-4201-945a-61436c9c5384 │ │ default │ cloud_artifact_store │ cloud_orchestrator │ eks_seldon │ cloud_registry │ aws_secret_manager ┃
┠────────┼──────────────────────┼──────────────────────────────────────┼────────┼─────────┼──────────────────────┼───────────────────────┼────────────────┼────────────────────┼────────────────────┨
┃ │ local_kubeflow_stack │ 8d9343ac-d405-43bd-ab9c-85637e479efe │ │ default │ default │ kubeflow_orchestrator │ │ local_registry │ ┃
┗━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┛
The zenml profile migrate CLI command also provides command line flags for cases in which the user wants to overwrite existing components or stacks, or ignore errors.
Decoupling Stack Component configuration from implementation
Stack components can now be registered without having the required integrations installed. As part of this change, we split all existing stack component definitions into three classes: an implementation class that defines the logic of the stack component, a config class that defines the attributes and performs input validations, and a flavor class that links implementation and config classes together. See component flavor models #895 for more details.
If you are only using stack component flavors that are shipped with the zenml Python distribution, this change has no impact on the configuration of your existing stacks. However, if you are currently using custom stack component implementations, you will need to update them to the new format. See the documentation on writing custom stack component flavors for updated information on how to do this.
Shared ZenML Stacks and Stack Components | reference | https://docs.zenml.io/v/docs/reference/migration-guide/migration-zero-twenty | 507 |
and authorized to access the AWS platform or not.

4. find out which S3 buckets, ECR registries, and EKS Kubernetes clusters we can gain access to. We'll use this information to configure the Stack Components in our minimal AWS stack: an S3 Artifact Store, a Kubernetes Orchestrator, and an ECR Container Registry.
```sh
zenml service-connector list-resources --resource-type s3-bucket
```
Example Command Output
```text
The following 's3-bucket' resources can be accessed by service connectors configured in your workspace:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃
┠──────────────────────────────────────┼─────────────────────┼────────────────┼───────────────┼───────────────────────────────────────┨
┃ bf073e06-28ce-4a4a-8100-32e7cb99dced │ aws-demo-multi │ 🔶 aws │ 📦 s3-bucket │ s3://zenfiles ┃
┃ │ │ │ │ s3://zenml-demos ┃
┃ │ │ │ │ s3://zenml-generative-chat ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```
```sh
zenml service-connector list-resources --resource-type kubernetes-cluster
```
Example Command Output
```text
The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 582 |
last_run = client.get_pipeline("<PIPELINE_NAME>").last_run
trainer_step = last_run.get_step("<STEP_NAME>")
tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
This will link to your deployed MLflow instance UI, or the local MLflow experiment file.
Additional Configuration
You can further configure the experiment tracker using MLFlowExperimentTrackerSettings:
from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings
mlflow_settings = MLFlowExperimentTrackerSettings(
    nested=True,
    tags={"key": "value"},
)

@step(
    experiment_tracker="<MLFLOW_TRACKER_STACK_COMPONENT_NAME>",
    settings={
        "experiment_tracker.mlflow": mlflow_settings
    },
)
def my_step() -> None:
    ...
For more details and advanced options, see the full MLflow Experiment Tracker documentation.
PreviousKubernetes
NextSkypilot
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/popular-integrations/mlflow | 175 |
stack components. You can read more about it here.For the deployment of ZenML, you have the option to either self-host it or register for a free account on ZenML Cloud.
PreviousIntroduction
NextCore concepts
Last updated 15 days ago | getting-started | https://docs.zenml.io/getting-started/installation | 50 |
byproduct of implementing a high-security profile.Note that the gcloud local client can only be configured with credentials issued by the GCP Service Connector if the connector is configured with the GCP user account authentication method or the GCP service account authentication method and if the generate_temporary_tokens option is set to true in the Service Connector configuration.
Only the gcloud local application default credentials configuration will be updated by the GCP Service Connector configuration. This makes it possible to use libraries and SDKs that use the application default credentials to access GCP resources.
The following shows an example of configuring the local Kubernetes CLI to access a GKE cluster reachable through a GCP Service Connector:
zenml service-connector list --name gcp-user-account
Example Command Output
┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓
┃ ACTIVE │ NAME │ ID │ TYPE │ RESOURCE TYPES │ RESOURCE NAME │ SHARED │ OWNER │ EXPIRES IN │ LABELS ┃
┠────────┼──────────────────┼──────────────────────────────────────┼────────┼───────────────────────┼───────────────┼────────┼─────────┼────────────┼────────┨
┃ │ gcp-user-account │ ddbce93f-df14-4861-a8a4-99a80972f3bc │ 🔵 gcp │ 🔵 gcp-generic │ <multiple> │ ➖ │ default │ │ ┃
┃ │ │ │ │ 📦 gcs-bucket │ │ │ │ │ ┃
┃ │ │ │ │ 🌀 kubernetes-cluster │ │ │ │ │ ┃
┃ │ │ │ │ 🐳 docker-registry │ │ │ │ │ ┃ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 484 |
Evidently
How to keep your data quality in check and guard against data and model drift with Evidently profiling
The Evidently Data Validator flavor provided with the ZenML integration uses Evidently to perform data quality, data drift, model drift and model performance analyses, to generate reports and run checks. The reports and check results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
When would you want to use it?
Evidently is an open-source library that you can use to monitor and debug machine learning models by analyzing the data that they use, through a powerful set of data profiling and visualization features. You can also use it to run a variety of data and model validation reports and tests, from data integrity tests that work with a single dataset, to model evaluation tests, to data drift analysis and model performance comparison tests. All this can be done with minimal configuration input from the user, or customized with specialized conditions that the validation tests should perform.
Evidently currently works with tabular data in pandas.DataFrame or CSV file formats and can handle both regression and classification tasks.
You should use the Evidently Data Validator when you need the following data and/or model validation features that are possible with Evidently:
Data Quality reports and tests: provides detailed feature statistics and a feature behavior overview for a single dataset. It can also compare any two datasets. E.g. you can use it to compare train and test data, reference and current data, or two subgroups of one dataset.
Data Drift reports and tests: helps detect and explore feature distribution changes in the input data by comparing two datasets with identical schema.
idator).
PreviousWhylogs
NextExperiment TrackersLast updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/custom | 17 |
the case.
Well-known dependency resolution issues

Some of ZenML's integrations come with strict dependency and package version requirements. We try to keep these dependency requirement ranges as wide as possible for the integrations developed by ZenML, but it is not always possible to make this work completely smoothly. Here is one of the known issues:
click: ZenML currently requires click~=8.0.3 for its CLI. This is on account of another dependency of ZenML. Using versions of click in your own project that are greater than 8.0.3 may cause unanticipated behaviors.
Manually bypassing ZenML's integration installation
It is possible to skip ZenML's integration installation process and install dependencies manually. This is not recommended, but it is possible and can be done at your own risk.
Note that the zenml integration install ... command runs a pip install ... under the hood as part of its implementation, taking the dependencies listed in the integration object and installing them. For example, zenml integration install gcp will run pip install "kfp==1.8.16" "gcsfs" "google-cloud-secret-manager" ... and so on, since they are specified in the integration definition.
To do this, you will need to install the dependencies for the integration you want to use manually. You can find the dependencies for the integrations by running the following:
# to have the requirements exported to a file
zenml integration export-requirements --output-file integration-requirements.txt INTEGRATION_NAME
# to have the requirements printed to the console
zenml integration export-requirements INTEGRATION_NAME
You can then amend and tweak those requirements as you see fit. Note that if you are using a remote orchestrator, you would then have to place the updated versions for the dependencies in a DockerSettings object (described in detail here) which will then make sure everything is working as you need.
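For example, after exporting and editing the requirements file, you could point a DockerSettings object at it; a sketch, assuming the file exported above sits next to your pipeline code:

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Use the manually curated requirements file for Docker builds
docker_settings = DockerSettings(requirements="integration-requirements.txt")

@pipeline(settings={"docker": docker_settings})
def my_pipeline() -> None:
    ...
```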
PreviousConfigure Python environments
NextConfigure the server environment
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/configure-python-environments/handling-dependencies | 405 |
    failure_rate = (failures / total_tests) * 100
    return round(failure_rate, 2)

When we run this as part of the evaluation pipeline, we get a 16% failure rate, which again tells us that we're doing pretty well but that there is room for improvement. As a baseline, this is a good starting point. We can then iterate on the retrieval component to improve its performance.
To take this further, there are a number of ways it might be improved:
More diverse question generation: The current question generation approach uses a single prompt to generate questions based on the document chunks. You could experiment with different prompts or techniques to generate a wider variety of questions that test the retrieval component more thoroughly. For example, you could prompt the LLM to generate questions of different types (factual, inferential, hypothetical, etc.) or difficulty levels.
Semantic similarity metrics: In addition to checking if the expected URL is retrieved, you could calculate semantic similarity scores between the query and the retrieved documents using metrics like cosine similarity (see the sketch below). This would give you a more nuanced view of retrieval performance beyond just binary success/failure. You could track average similarity scores and use them as a target metric to improve.
Comparative evaluation: Test out different retrieval approaches (e.g. different embedding models, similarity search algorithms, etc.) and compare their performance on the same set of queries. This would help identify the strengths and weaknesses of each approach.
Error analysis: Do a deeper dive into the failure cases to understand patterns and potential areas for improvement. Are certain types of questions consistently failing? Are there common characteristics among the documents that aren't being retrieved properly? Insights from error analysis can guide targeted improvements to the retrieval component. | user-guide | https://docs.zenml.io/user-guide/llmops-guide/evaluation/retrieval | 338 |
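As a sketch of the semantic similarity idea mentioned above (assuming you already have embeddings for the query and a retrieved document as NumPy vectors):

```python
import numpy as np

def cosine_similarity(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors of equal dimension."""
    return float(
        np.dot(query_emb, doc_emb)
        / (np.linalg.norm(query_emb) * np.linalg.norm(doc_emb))
    )

# Toy example with random vectors standing in for real embeddings
query = np.random.rand(384)
document = np.random.rand(384)
print(f"similarity: {cosine_similarity(query, document):.3f}")
```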
Google Cloud Container Registry
Storing container images in GCP.
The GCP container registry is a container registry flavor that comes built-in with ZenML and uses the Google Artifact Registry.
Important Notice: Google Container Registry is being replaced by Artifact Registry. Please start using Artifact Registry for your containers. As per Google's documentation, "after May 15, 2024, Artifact Registry will host images for the gcr.io domain in Google Cloud projects without previous Container Registry usage. After March 18, 2025, Container Registry will be shut down." The terms container registry and artifact registry will be used interchangeably throughout this document.
When to use it
You should use the GCP container registry if:
one or more components of your stack need to pull or push container images.
you have access to GCP. If you're not using GCP, take a look at the other container registry flavors.
How to deploy it
The GCP container registry (and GCP integration in general) currently only works for Python versions <3.11. The ZenML team is aware of this dependency clash/issue and is working on a fix. For now, please use Python <3.11 together with the GCP integration.
When using the Google Artifact Registry, you need to:
enable it here
go here and create a Docker repository.
Infrastructure Deployment
A GCP Container Registry can be deployed directly from the ZenML CLI:
zenml container-registry deploy gcp_container_registry --flavor=gcp --provider=gcp ...
You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
How to find the registry URI
When using the Google Artifact Registry, the GCP container registry URI should have the following format:
<REGION>-docker.pkg.dev/<PROJECT_ID>/<REPOSITORY_NAME>
# Examples: | stack-components | https://docs.zenml.io/stack-components/container-registries/gcp | 404 |
ntainer │ service-principal │ │ ┃
┃ │ │ 🌀 kubernetes-cluster │ access-token │ │ ┃
┃ │ │ 🐳 docker-registry │ │ │ ┃
┠──────────────────────────────┼───────────────┼───────────────────────┼───────────────────┼───────┼────────┨
┃ AWS Service Connector │ 🔶 aws │ 🔶 aws-generic │ implicit │ ✅ │ ✅ ┃
┃ │ │ 📦 s3-bucket │ secret-key │ │ ┃
┃ │ │ 🌀 kubernetes-cluster │ sts-token │ │ ┃
┃ │ │ 🐳 docker-registry │ iam-role │ │ ┃
┃ │ │ │ session-token │ │ ┃
┃ │ │ │ federation-token │ │ ┃
┠──────────────────────────────┼───────────────┼───────────────────────┼───────────────────┼───────┼────────┨
┃ GCP Service Connector │ 🔵 gcp │ 🔵 gcp-generic │ implicit │ ✅ │ ✅ ┃
┃ │ │ 📦 gcs-bucket │ user-account │ │ ┃
┃ │ │ 🌀 kubernetes-cluster │ service-account │ │ ┃
┃ │ │ 🐳 docker-registry │ oauth2-token │ │ ┃
┃ │ │ │ impersonation │ │ ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛ | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 462 |
┃┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ SHARED │ ➖ ┃
┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ CREATED_AT │ 2023-06-19 19:28:31.679843 ┃
┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨
┃ UPDATED_AT │ 2023-06-19 19:28:31.679848 ┃
┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Configuration
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ PROPERTY │ VALUE ┃
┠───────────────────────┼────────────────────────────────────────────────────────────────────────┨
┃ region │ us-east-1 ┃
┠───────────────────────┼────────────────────────────────────────────────────────────────────────┨
┃ role_arn │ arn:aws:iam::715803424590:role/OrganizationAccountRestrictedAccessRole ┃
┠───────────────────────┼────────────────────────────────────────────────────────────────────────┨
┃ aws_access_key_id │ [HIDDEN] ┃
┠───────────────────────┼────────────────────────────────────────────────────────────────────────┨
┃ aws_secret_access_key │ [HIDDEN] ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
However, clients receive temporary STS tokens instead of the AWS Secret Key configured in the connector (note the authentication method, expiration time, and credentials): | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 489 |
board.
The Great Expectations data profiler step

The standard Great Expectations data profiler step builds an Expectation Suite automatically by running a UserConfigurableProfiler on an input pandas.DataFrame dataset. The generated Expectation Suite is saved in the Great Expectations Expectation Store, but also returned as an ExpectationSuite artifact that is versioned and saved in the ZenML Artifact Store. The step automatically rebuilds the Data Docs.
At a minimum, the step configuration expects a name to be used for the Expectation Suite:
from zenml.integrations.great_expectations.steps import (
    great_expectations_profiler_step,
)

ge_profiler_step = great_expectations_profiler_step.with_options(
    parameters={
        "expectation_suite_name": "steel_plates_suite",
        "data_asset_name": "steel_plates_train_df",
    }
)
The step can then be inserted into your pipeline where it can take in a pandas dataframe, e.g.:
from zenml import pipeline
from zenml.config import DockerSettings
from zenml.integrations.constants import GREAT_EXPECTATIONS, SKLEARN

docker_settings = DockerSettings(required_integrations=[SKLEARN, GREAT_EXPECTATIONS])

@pipeline(settings={"docker": docker_settings})
def profiling_pipeline():
"""Data profiling pipeline for Great Expectations.
The pipeline imports a reference dataset from a source then uses the builtin
Great Expectations profiler step to generate an expectation suite (i.e.
validation rules) inferred from the schema and statistical properties of the
reference dataset.
Args:
importer: reference data importer step
profiler: data profiler step
"""
dataset, _ = importer()
ge_profiler_step(dataset)
profiling_pipeline()
As can be seen from the step definition, the step takes in a pandas.DataFrame dataset, and it returns a Great Expectations ExpectationSuite object:
@step
def great_expectations_profiler_step(
dataset: pd.DataFrame,
expectation_suite_name: str,
data_asset_name: Optional[str] = None,
profiler_kwargs: Optional[Dict[str, Any]] = None,
overwrite_existing_suite: bool = True,
) -> ExpectationSuite:
... | stack-components | https://docs.zenml.io/stack-components/data-validators/great-expectations | 403 |
SDK Docs.
Enabling CUDA for GPU-backed hardware

Note that if you wish to use this step operator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. This requires some extra settings customization and is essential to enable CUDA so the GPU can deliver its full acceleration.
PreviousStep Operators
NextGoogle Cloud VertexAI
Last updated 15 days ago | stack-components | https://docs.zenml.io/stack-components/step-operators/sagemaker | 81 |