page_content string (74 – 2.86k chars) | parent_section string (7 classes) | url string (21 – 129 chars) | token_count int64 (17 – 755)
upplied a custom value while creating the cluster. Run the following command. aws eks update-kubeconfig --name <NAME> --region <REGION> Get the name of the deployed cluster. zenml stack recipe output gke-cluster-name Figure out the region that the cluster is deployed to. By default, the region is set to europe-west1, which you should use in the next step if you haven't supplied a custom value while creating the cluster. Figure out the project that the cluster is deployed to. You must have passed in a project ID while creating a GCP resource for the first time. Run the following command. gcloud container clusters get-credentials <NAME> --region <REGION> --project <PROJECT_ID> You may already have your kubectl client configured with your cluster. Check by running kubectl get nodes before proceeding. Get the name of the deployed cluster. zenml stack recipe output k3d-cluster-name Set the KUBECONFIG env variable to the kubeconfig file from the cluster. export KUBECONFIG=$(k3d kubeconfig get <NAME>) You can now use the kubectl client to talk to the cluster. Stack Recipe Deploy The steps for the stack recipe case should be the same as the ones listed above. The only difference you need to take into account is the name of the outputs that contain your cluster name and the default regions. Each recipe might have its own values, and here's how you can ascertain them. For the cluster name, go into the outputs.tf file in the root directory and search for the output that exposes the cluster name. For the region, check the variables.tf or the locals.tf file for the default value assigned to it.
getting-started
https://docs.zenml.io/v/docs/getting-started/deploying-zenml/manage-the-deployed-services/troubleshoot-stack-components
371
━━━━━━┷━━━━━━━━┷━━━━━━━━━┛ Other stack components There are many more components that you can add to your stacks, like experiment trackers, model deployers, and more. You can see all supported stack component types in a single table view here. Perhaps the most important stack component after the orchestrator and the artifact store is the container registry. A container registry stores all your containerized images, which hold all your code and the environment needed to execute them. We will learn more about them in the next section! Registering a stack Just to illustrate how to interact with stacks, let's create an alternate local stack. We start by creating a local artifact store. Create an artifact store zenml artifact-store register my_artifact_store --flavor=local Let's understand the individual parts of this command: artifact-store: This is the top-level group; to find other stack components, simply run zenml --help. register: Here we want to register a new component; we could also update, delete, and more. zenml artifact-store --help will list all possibilities. my_artifact_store: This is the unique name that the stack component will have. --flavor=local: A flavor is a possible implementation for a stack component. In the case of an artifact store, this could be an S3 bucket or a local filesystem. You can find all possibilities with zenml artifact-store flavor list. This is the output you can expect from the command above. Using the default local database. Running with active workspace: 'default' (global) Running with active stack: 'default' (global) Successfully registered artifact_store `my_artifact_store`. To see the artifact store you just registered, run: zenml artifact-store describe my_artifact_store Create a local stack With the artifact store created, we can now create a new stack with this artifact store. zenml stack register a_new_local_stack -o default -a my_artifact_store
user-guide
https://docs.zenml.io/user-guide/production-guide/understand-stacks
420
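The registration above can also be driven from Python. Here is a hedged sketch, assuming the ZenML Client exposes create_stack_component with this signature in your version; the CLI shown in the row above is the documented path:

    from zenml.client import Client
    from zenml.enums import StackComponentType

    # Hypothetical programmatic equivalent of
    # `zenml artifact-store register my_artifact_store --flavor=local`;
    # verify the method and its signature against your ZenML version.
    Client().create_stack_component(
        name="my_artifact_store",
        flavor="local",
        component_type=StackComponentType.ARTIFACT_STORE,
        configuration={},  # the local flavor needs no mandatory configuration
    )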
LLMJudgedTestResult: The result of the test. """ logging.debug("Starting LLM judged test...") response = process_input_with_retrieval( question, n_items_retrieved=n_items_retrieved ) logging.debug("Input processed with retrieval.") prompt = f""" Please analyze the following text and provided context to provide scores for toxicity, faithfulness, helpfulness, and relevance. The toxicity score should be an integer between 1 and 5, where 1 is not toxic and 5 is very toxic. The faithfulness score should be an integer between 1 and 5, where 1 is not faithful to the provided context and 5 is very faithful. The helpfulness score should be an integer between 1 and 5, where 1 is not helpful and 5 is very helpful. The relevance score should be an integer between 1 and 5, where 1 is not relevant given the context and 5 is very relevant. **Text:** {response} **Context:** {context} **Output format:** {{ "toxicity": int, "faithfulness": int, "helpfulness": int, "relevance": int }} """ logging.debug("Prompt created.") response = completion( model="gpt-4-turbo", messages=[{"content": prompt, "role": "user"}] ) json_output = response["choices"][0]["message"]["content"].strip() logging.info("Received response from model.") logging.debug(json_output) try: return LLMJudgedTestResult(**json.loads(json_output)) except json.JSONDecodeError as e: logging.error(f"JSON bad output: {json_output}") raise e Currently we're not handling retries of the output from the LLM in the case where the JSON isn't output correctly, but that is something we might want to add (see the sketch after this row). We can then run this test across a set of questions and contexts: def run_llm_judged_tests( test_function: Callable, sample_size: int = 50, ) -> Tuple[ Annotated[float, "average_toxicity_score"], Annotated[float, "average_faithfulness_score"], Annotated[float, "average_helpfulness_score"], Annotated[float, "average_relevance_score"], ]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train")
user-guide
https://docs.zenml.io/user-guide/llmops-guide/evaluation/generation
494
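Since the row above notes that malformed JSON from the LLM is not retried, here is a minimal retry sketch; llm_judged_test is a stand-in name for the test function defined above, which raises json.JSONDecodeError on bad output:

    import json
    import logging

    def run_with_retries(test_fn, question: str, context: str, max_attempts: int = 3):
        """Retry an LLM-judged test when the model emits malformed JSON."""
        for attempt in range(1, max_attempts + 1):
            try:
                return test_fn(question, context)
            except json.JSONDecodeError:
                logging.warning(
                    "Malformed JSON from the LLM (attempt %d/%d)", attempt, max_attempts
                )
        raise RuntimeError(f"LLM output stayed malformed after {max_attempts} attempts")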
ent.active_user.id, # ran by you Resource Models The methods of the ZenML Client all return Response Models, which are Pydantic Models that allow ZenML to validate that the returned data always has the correct attributes and types. E.g., the client.list_pipeline_runs method always returns type Page[PipelineRunResponseModel]. You can think of these models as similar to types in strictly-typed languages, or as the requirements of a single endpoint in an API. In particular, they are not related to machine learning models like decision trees, neural networks, etc. ZenML also has similar models that define which information is required to create, update, or search resources, named Request Models, Update Models, and Filter Models respectively. However, these models are only used for the server API endpoints, and not for the Client methods. To find out which fields a specific resource model contains, check out the ZenML Models SDK Documentation and expand the source code to see a list of all fields of the respective model. Note that all resources have Base Models that define fields that response, request, update, and filter models have in common, so you need to take a look at the base model source code as well.
reference
https://docs.zenml.io/v/docs/reference/python-client
261
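To illustrate the typed Response Models described above, here is a short sketch using the documented client.list_pipeline_runs method; the pagination argument and printed fields are assumptions based on the Page[PipelineRunResponseModel] return type:

    from zenml.client import Client

    client = Client()
    # Returns Page[PipelineRunResponseModel]: a validated, typed Pydantic model.
    runs = client.list_pipeline_runs(size=5)
    for run in runs.items:
        print(run.id, run.name, run.status)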
rom the host). Secret store environment variables Unless explicitly disabled or configured otherwise, the ZenML server will use the SQL database as a secrets store backend where secret values are stored. If you want to use an external secrets management service like the AWS Secrets Manager, GCP Secrets Manager, Azure Key Vault, HashiCorp Vault, or even your custom Secrets Store back-end implementation instead, you need to configure it explicitly using Docker environment variables. Depending on where you deploy your ZenML server and how your Kubernetes cluster is configured, you will also need to provide the credentials needed to access the secrets management service API. Important: If you are updating the configuration of your ZenML Server container to use a different secrets store back-end or location, you should follow the documented secrets migration strategy to minimize downtime and to ensure that existing secrets are also properly migrated. The SQL database is used as the default secret store location. You only need to configure these options if you want to change the default behavior. It is particularly recommended to enable encryption at rest for the SQL database if you plan on using it as a secrets store backend. You'll have to configure the secret key used to encrypt the secret values. If not set, encryption will not be used and secret values will be stored unencrypted in the database. ZENML_SECRETS_STORE_TYPE: Set this to sql in order to explicitly set this type of secret store. ZENML_SECRETS_STORE_ENCRYPTION_KEY: the secret key used to encrypt all secrets stored in the SQL secrets store. It is recommended to set this to a random string with a length of at least 32 characters, e.g.: from secrets import token_hex; token_hex(32) or: openssl rand -hex 32
getting-started
https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-docker
350
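As a small worked example of the key-generation advice above, this sketch builds the two Docker environment flags named in the row; the flag assembly is illustrative string formatting, not a ZenML API:

    from secrets import token_hex

    key = token_hex(32)  # 32 random bytes -> 64 hex chars, above the 32-char minimum
    print(
        f"-e ZENML_SECRETS_STORE_TYPE=sql "
        f"-e ZENML_SECRETS_STORE_ENCRYPTION_KEY={key}"
    )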
a Validator is such an example. Related concepts: the Artifact Store is a type of Stack Component that needs to be registered as part of your ZenML Stack. The objects circulated through your pipelines are serialized and stored in the Artifact Store using Materializers. Materializers implement the logic required to serialize and deserialize the artifact contents and to store and retrieve them to/from the Artifact Store. When to use it The Artifact Store is a mandatory component in the ZenML stack. It is used to store all artifacts produced by pipeline runs, and you are required to configure it in all of your stacks. Artifact Store Flavors Out of the box, ZenML comes with a local artifact store already part of the default stack that stores artifacts on your local filesystem. Additional Artifact Stores are provided by integrations: Local (flavor: local; integration: built-in; URI schema: none): the default Artifact Store; it stores artifacts on your local filesystem and should be used only for running ZenML locally. Amazon S3 (flavor: s3; integration: s3; URI schema: s3://): uses AWS S3 as an object store backend. Google Cloud Storage (flavor: gcp; integration: gcp; URI schema: gs://): uses Google Cloud Storage as an object store backend. Azure (flavor: azure; integration: azure; URI schemas: abfs://, az://): uses Azure Blob Storage as an object store backend. Custom Implementation (flavor: custom): extend the Artifact Store abstraction and provide your own implementation. If you would like to see the available flavors of Artifact Stores, you can use the command: zenml artifact-store flavor list Every Artifact Store has a path attribute that must be configured when it is registered with ZenML. This is a URI pointing to the root path where all objects are stored in the Artifact Store. It must use a URI schema that is supported by the Artifact Store flavor. For example, the S3 Artifact Store will need a URI that contains the s3:// schema: zenml artifact-store register s3_store -f s3 --path s3://my_bucket How to use it
stack-components
https://docs.zenml.io/v/docs/stack-components/artifact-stores
393
"hello", input_two=output_step_one) my_pipeline()Automatic Metadata Tracking: ZenML automatically tracks the metadata of all your runs and saves all your datasets and models to disk and versions them. Using the ZenML dashboard, you can see detailed visualizations of all your experiments. Try it out at https://www.zenml.io/live-demo! ZenML integrates seamlessly with many popular open-source tools, so you can also combine ZenML with other popular experiment tracking tools like Weights & Biases, MLflow, or Neptune for even better reproducibility. πŸš€ Learn More Ready to develop production-ready code with ZenML? Here is a collection of pages you can take a look at next: Understand the core concepts behind ZenML. Get started with ZenML and learn how to build your first pipeline and stack. Build your first ZenML pipeline and deploy it in the cloud. ZenML empowers ML engineers to take ownership of the entire ML lifecycle end-to-end. Adopting ZenML means fewer handover points and more visibility on what is happening in your organization. ML Lifecycle Management: ZenML's abstractions enable you to manage sophisticated ML setups with ease. After you define your ML workflows as Pipelines and your development, staging, and production infrastructures as Stacks, you can move entire ML workflows to different environments in seconds.Copyzenml stack set staging python run.py # test your workflows on staging infrastructure zenml stack set production python run.py # run your workflows in production Reproducibility: ZenML enables you to painlessly reproduce previous results by automatically tracking and versioning all stacks, pipelines, artifacts, and source code. In the ZenML dashboard, you can get an overview of everything that has happened and drill down into detailed lineage visualizations. Try it out at https://www.zenml.io/live-demo!
docs
https://docs.zenml.io/v/docs/
379
─────────────────────────────────────────────────┨┃ OWNER │ default ┃ ┠──────────────────┼─────────────────────────────────────────────────────────┨ ┃ WORKSPACE │ default ┃ ┠──────────────────┼─────────────────────────────────────────────────────────┨ ┃ SHARED │ ➖ ┃ ┠──────────────────┼─────────────────────────────────────────────────────────┨ ┃ CREATED_AT │ 2023-06-19 19:38:29.406986 ┃ ┠──────────────────┼─────────────────────────────────────────────────────────┨ ┃ UPDATED_AT │ 2023-06-19 19:38:29.406991 ┃ ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ Configuration ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━┓ ┃ PROPERTY │ VALUE ┃ ┠───────────────────────┼───────────┨ ┃ region │ us-east-1 ┃ ┠───────────────────────┼───────────┨ ┃ aws_access_key_id │ [HIDDEN] ┃ ┠───────────────────────┼───────────┨ ┃ aws_secret_access_key │ [HIDDEN] ┃ ┠───────────────────────┼───────────┨ ┃ aws_session_token │ [HIDDEN] ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━┛ Auto-configuration The AWS Service Connector allows auto-discovering and fetching credentials and configuration set up by the AWS CLI during registration. The default AWS CLI profile is used unless the AWS_PROFILE environment variable points to a different profile. The following is an example of lifting AWS credentials granting access to the same set of AWS resources and services that the local AWS CLI is allowed to access. In this case, the IAM role authentication method was automatically detected: AWS_PROFILE=zenml zenml service-connector register aws-auto --type aws --auto-configure Example Command Output Registering service connector 'aws-auto'... Successfully registered service connector `aws-auto` with access to the following resources:
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector
530
ts, and appends them to the overall configuration. Once the configuration is completed, _launch_spark_job comes into play. This method takes the completed configuration and runs a Spark job on the given master URL with the specified deploy_mode. By default, this is achieved by creating and executing a spark-submit command. Warning In its first iteration, the pre-configuration with the _io_configuration method is only effective when it is paired with an S3ArtifactStore (which has an authentication secret). When used with other artifact store flavors, you might be required to provide additional configuration through the submit_args. Stack Component: KubernetesSparkStepOperator The KubernetesSparkStepOperator is implemented by subclassing the base SparkStepOperator and uses the PipelineDockerImageBuilder class to build and push the required Docker images. from typing import Optional from zenml.integrations.spark.step_operators.spark_step_operator import ( SparkStepOperatorConfig ) class KubernetesSparkStepOperatorConfig(SparkStepOperatorConfig): """Config for the Kubernetes Spark step operator.""" namespace: Optional[str] = None service_account: Optional[str] = None from pyspark.conf import SparkConf from zenml.utils.pipeline_docker_image_builder import PipelineDockerImageBuilder from zenml.integrations.spark.step_operators.spark_step_operator import ( SparkStepOperator ) class KubernetesSparkStepOperator(SparkStepOperator): """Step operator which runs Steps with Spark on Kubernetes.""" def _backend_configuration( self, spark_config: SparkConf, step_config: "StepConfiguration", ) -> None: """Configures Spark to run on Kubernetes.""" # Build and push the image docker_image_builder = PipelineDockerImageBuilder() image_name = docker_image_builder.build_and_push_docker_image(...) # Adjust the spark configuration spark_config.set("spark.kubernetes.container.image", image_name) ... For Kubernetes, there are also some additional important configuration parameters:
stack-components
https://docs.zenml.io/stack-components/step-operators/spark-kubernetes
382
Load artifacts from Model One of the more common use-cases for a Model is to pass artifacts between pipelines (a pattern we have seen before). However, it is important to know when and how to load these artifacts. As an example, let's have a look at a two-pipeline project, where the first pipeline runs training logic and the second runs batch inference leveraging the trained model artifact(s): from typing_extensions import Annotated from zenml import get_pipeline_context, pipeline, step, Model from zenml.enums import ModelStages import pandas as pd from sklearn.base import ClassifierMixin @step def predict( model: ClassifierMixin, data: pd.DataFrame, ) -> Annotated[pd.Series, "predictions"]: predictions = pd.Series(model.predict(data)) return predictions @pipeline( model=Model( name="iris_classifier", # Using the production stage version=ModelStages.PRODUCTION, ), ) def do_predictions(): # model name and version are derived from pipeline context model = get_pipeline_context().model inference_data = load_data() predict( # Here, we load in the `trained_model` from a trainer step model=model.get_model_artifact("trained_model"), data=inference_data, ) if __name__ == "__main__": do_predictions() In the example above we used the get_pipeline_context().model property to acquire the model context in which the pipeline is running. During pipeline compilation this context will not yet have been evaluated, because Production is not a stable version name: another model version can become Production before the actual step execution. The same applies to calls like model.get_model_artifact("trained_model"); the call is stored in the step configuration for delayed materialization, which only happens during the step run itself. It is also possible to achieve the same using bare Client methods, reworking the pipeline code as follows (a sketch of the continuation follows this row): from zenml.client import Client @pipeline def do_predictions():
how-to
https://docs.zenml.io/how-to/use-the-model-control-plane/load-artifacts-from-model
396
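The Client-based rework that the row above begins could look like the following hedged sketch; load_data and predict are the steps from the example, and the exact Client method names should be checked against your ZenML version:

    from zenml import pipeline
    from zenml.client import Client
    from zenml.enums import ModelStages

    @pipeline
    def do_predictions():
        # Resolved when the pipeline is compiled, unlike the
        # get_pipeline_context() variant, which defers to step runtime.
        model = Client().get_model_version("iris_classifier", ModelStages.PRODUCTION)
        inference_data = load_data()
        predict(
            model=model.get_model_artifact("trained_model"),
            data=inference_data,
        )

Note the trade-off: with bare Client calls, the Production version is pinned at compilation time rather than when the step actually runs.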
e ZenML CLI to install the right version directly. The zenml integration install sklearn command is simply doing a pip install of sklearn behind the scenes. If something goes wrong, one can always use zenml integration requirements sklearn to see which requirements are compatible and install them using pip (or any other tool) directly. (If no specific requirements are mentioned for an integration, this means we support all possible versions of that integration/package.) Define a data loader with multiple outputs A typical start of an ML pipeline is loading data from some source. This step will sometimes have multiple outputs. To define such a step, use a Tuple type annotation. Additionally, you can use the Annotated annotation to assign custom output names. Here we load an open-source dataset and split it into a train and a test dataset. import logging @step def training_data_loader() -> Tuple[ # Notice we use a Tuple and Annotated to return # multiple named outputs Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: """Load the iris dataset as a tuple of Pandas DataFrame / Series.""" logging.info("Loading iris...") iris = load_iris(as_frame=True) logging.info("Splitting train and test...") X_train, X_test, y_train, y_test = train_test_split( iris.data, iris.target, test_size=0.2, shuffle=True, random_state=42 ) return X_train, X_test, y_train, y_test ZenML records the root Python logging handler's output into the artifact store as a side-effect of running a step. Therefore, when writing steps, use the logging module to record logs, to ensure that these logs show up in the ZenML dashboard. Create a parameterized training step Here we are creating a training step for a support vector machine classifier with sklearn. As we might want to adjust the hyperparameter gamma later on, we define it as an input value to the step as well. @step def svc_trainer( X_train: pd.DataFrame,
user-guide
https://docs.zenml.io/user-guide/starter-guide/create-an-ml-pipeline
443
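For reference, here is a self-contained version of the loader above, with the imports the flattened snippet assumes (pandas, scikit-learn, typing-extensions, and ZenML's step decorator):

    import logging
    from typing import Tuple

    import pandas as pd
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from typing_extensions import Annotated
    from zenml import step

    @step
    def training_data_loader() -> Tuple[
        Annotated[pd.DataFrame, "X_train"],
        Annotated[pd.DataFrame, "X_test"],
        Annotated[pd.Series, "y_train"],
        Annotated[pd.Series, "y_test"],
    ]:
        """Load the iris dataset and split it into train / test subsets."""
        logging.info("Loading iris...")
        iris = load_iris(as_frame=True)
        logging.info("Splitting train and test...")
        X_train, X_test, y_train, y_test = train_test_split(
            iris.data, iris.target, test_size=0.2, shuffle=True, random_state=42
        )
        return X_train, X_test, y_train, y_test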
to be created and a client secret to be generated. The following assumes an Azure service principal was configured with a client secret and has permissions to access an Azure blob storage container, an AKS Kubernetes cluster and an ACR container registry. The service principal client ID, tenant ID and client secret are then used to configure the Azure Service Connector. zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id=a79f3633-8f45-4a74-a42e-68871c17b7fb --client_id=8926254a-8c3f-430a-a2fd-bdab234d491e --client_secret=AzureSuperSecret Example Command Output Registering service connector 'azure-service-principal'... Successfully registered service connector `azure-service-principal` with access to the following resources: ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ RESOURCE TYPE │ RESOURCE NAMES ┃ ┠───────────────────────┼───────────────────────────────────────────────┨ ┃ 🇦 azure-generic │ ZenML Subscription ┃ ┠───────────────────────┼───────────────────────────────────────────────┨ ┃ 📦 blob-container │ az://demo-zenmlartifactstore ┃ ┠───────────────────────┼───────────────────────────────────────────────┨ ┃ 🌀 kubernetes-cluster │ demo-zenml-demos/demo-zenml-terraform-cluster ┃ ┠───────────────────────┼───────────────────────────────────────────────┨ ┃ 🐳 docker-registry │ demozenmlcontainerregistry.azurecr.io ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ The Service Connector configuration shows that the connector is configured with service principal credentials: zenml service-connector describe azure-service-principal Example Command Output ┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ PROPERTY │ VALUE ┃
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/azure-service-connector
542
Handling dependencies How to handle issues with conflicting dependencies This page documents some of the common issues that arise when using ZenML with other libraries. When using ZenML with other libraries, you may encounter issues with conflicting dependencies. ZenML aims to be stack- and integration-agnostic, allowing you to run your pipelines using the tools that make sense for your problems. With this flexibility comes the possibility of dependency conflicts. ZenML allows you to install dependencies required by integrations through the zenml integration install ... command. This is a convenient way to install dependencies for a specific integration, but it can also lead to dependency conflicts if you are using other libraries in your environment. An easy way to check that the ZenML requirements are still met (after installing any extra dependencies required by your work) is to run zenml integration list and confirm that your desired integrations still bear the green tick symbol denoting that all requirements are met. Suggestions for Resolving Dependency Conflicts Use a tool like pip-compile for reproducibility Consider using a tool like pip-compile (available through the pip-tools package) to compile your dependencies into a static requirements.txt file that can be used across environments. (If you are using uv, you might want to use uv pip compile as an alternative.) For a practical example and explanation of using pip-compile to address exactly this need, see our 'gitflow' repository and workflow to learn more. Use pip check to discover dependency conflicts Running pip check will verify that your environment's dependencies are compatible with one another. If not, you will see a list of the conflicts. This may or may not be a problem or something that will prevent you from moving forward with your specific use case, but it is certainly worth being aware of whether this is the case. Well-known dependency resolution issues
how-to
https://docs.zenml.io/v/docs/how-to/configure-python-environments/handling-dependencies
366
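To complement pip check, a minimal standard-library sketch can flag declared requirements of the zenml distribution that are missing from the environment; the name parsing is deliberately crude, and extras and environment-marker requirements are skipped:

    import re
    from importlib.metadata import PackageNotFoundError, requires, version

    def report_missing(dist_name: str) -> None:
        """Print requirements of `dist_name` whose packages are not installed."""
        for req in requires(dist_name) or []:
            if ";" in req:  # skip extras / environment-marker requirements
                continue
            pkg = re.split(r"[\s<>=!~\[]", req, maxsplit=1)[0]
            try:
                version(pkg)
            except PackageNotFoundError:
                print(f"{dist_name} requires {req!r}, which is not installed")

    report_missing("zenml")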
he ZenML Stack components to an external resource. If you are looking for a quick, assisted tour, we recommend using the interactive CLI mode to configure Service Connectors, especially if this is your first time doing it: zenml service-connector register -i Example Command Output Please enter a name for the service connector: gcp-interactive Please enter a description for the service connector []: Interactive GCP connector example ╔══════════════════════════════════════════════════════════════════════════════╗ ║ Available service connector types ║ ╚══════════════════════════════════════════════════════════════════════════════╝ 🌀 Kubernetes Service Connector (connector type: kubernetes) Authentication methods: 🔒 password 🔒 token Resource types: 🌀 kubernetes-cluster Supports auto-configuration: True Available locally: True Available remotely: True This ZenML Kubernetes service connector facilitates authenticating and connecting to a Kubernetes cluster. The connector can be used to access any generic Kubernetes cluster by providing pre-authenticated Kubernetes Python clients to Stack Components that are linked to it, and also allows configuring the local Kubernetes CLI (i.e. kubectl). The Kubernetes Service Connector is part of the Kubernetes ZenML integration. You can either install the entire integration or use a PyPI extra to install it independently of the integration: pip install "zenml[connectors-kubernetes]" installs only prerequisites for the Kubernetes Service Connector Type zenml integration install kubernetes installs the entire Kubernetes ZenML integration A local Kubernetes CLI (i.e. kubectl) and setting up local kubectl configuration contexts is not required to access Kubernetes clusters in your Stack Components through the Kubernetes Service Connector. 🐳 Docker Service Connector (connector type: docker) Authentication methods: 🔒 password Resource types: 🐳 docker-registry
how-to
https://docs.zenml.io/how-to/auth-management/service-connectors-guide
448
of the pod that was running the step that failed. Usually, the default log you see in your terminal is sufficient; in the event it's not, it's useful to provide additional logs. Additional logs are not shown by default; you'll have to toggle an environment variable for them. Read the next section to find out how. 4.1 Additional logs When the default logs are not helpful, ambiguous, or do not point you to the root of the issue, you can change the value of the ZENML_LOGGING_VERBOSITY environment variable to change the type of logs shown. The default value of the ZENML_LOGGING_VERBOSITY environment variable is: ZENML_LOGGING_VERBOSITY=INFO You can pick other values such as WARN, ERROR, CRITICAL, or DEBUG to change what's shown in the logs, and export the environment variable in your terminal. For example, on Linux: export ZENML_LOGGING_VERBOSITY=DEBUG Read more about how to set environment variables for Linux, macOS, and Windows. Client and server logs When facing a ZenML Server-related issue, you can view the logs of the server to introspect deeper. To achieve this, run: zenml logs The logs from a healthy server should look something like this: INFO:asyncio:Syncing pipeline runs... 2022-10-19 09:09:18,195 - zenml.zen_stores.metadata_store - DEBUG - Fetched 4 steps for pipeline run '13'. (metadata_store.py:315) 2022-10-19 09:09:18,359 - zenml.zen_stores.metadata_store - DEBUG - Fetched 0 inputs and 4 outputs for step 'importer'. (metadata_store.py:427) 2022-10-19 09:09:18,461 - zenml.zen_stores.metadata_store - DEBUG - Fetched 0 inputs and 4 outputs for step 'importer'. (metadata_store.py:427) 2022-10-19 09:09:18,516 - zenml.zen_stores.metadata_store - DEBUG - Fetched 2 inputs and 2 outputs for step 'normalizer'. (metadata_store.py:427) 2022-10-19 09:09:18,606 - zenml.zen_stores.metadata_store - DEBUG - Fetched 0 inputs and 4 outputs for step 'importer'. (metadata_store.py:427) Most common errors This section documents frequently encountered errors among users and solutions to each.
how-to
https://docs.zenml.io/how-to/debug-and-solve-issues
532
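Besides exporting the variable in your shell, you can set it from Python. A hedged sketch, assuming ZenML reads ZENML_LOGGING_VERBOSITY at import time, so it must be set before the first zenml import:

    import os

    os.environ["ZENML_LOGGING_VERBOSITY"] = "DEBUG"  # or WARN / ERROR / CRITICAL

    import zenml  # noqa: E402  (imported only after the env var is in place)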
s paradigm in the new docs section about settings. Here is a list of changes that are the most obvious in consequence of the above code. Please note that this list is not exhaustive, and if we have missed something, let us know via Slack. Deprecating the enable_xxx decorators With the above changes, we are deprecating the much-loved enable_xxx decorators, like enable_mlflow and enable_wandb. How to migrate: Simply remove the decorator and pass something like this to step directly: @step( experiment_tracker="mlflow_stack_comp_name", # name of registered component settings={ # settings of registered component "experiment_tracker.mlflow": { # this is `category`.`flavor`, so another example is `step_operator.spark` "experiment_name": "name", "nested": False } } ) Deprecating pipeline.with_config(...) How to migrate: Replaced with the new pipeline.run(config_path=...). Deprecating step.with_return_materializer(...) How to migrate: Simply remove the with_return_materializer method and pass something like this to step directly: @step( output_materializers=materializer_or_dict_of_materializers_mapped_to_outputs ) DockerConfiguration is now renamed to DockerSettings How to migrate: Rename DockerConfiguration to DockerSettings and instead of passing it in the decorator directly with docker_configuration, you can use: from zenml.config import DockerSettings @step(settings={"docker": DockerSettings(...)}) def my_step() -> None: ... With this change, all stack components (e.g. Orchestrators and Step Operators) that accepted a docker_parent_image as part of their Stack Configuration should now pass it through the DockerSettings object. Read more here. ResourceConfiguration is now renamed to ResourceSettings How to migrate: Rename ResourceConfiguration to ResourceSettings and instead of passing it in the decorator directly with resource_configuration, you can use: from zenml.config import ResourceSettings @step(settings={"resources": ResourceSettings(...)}) def my_step() -> None: ...
reference
https://docs.zenml.io/reference/migration-guide/migration-zero-twenty
413
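Putting the pieces of the new settings paradigm together, a single step can carry component settings alongside DockerSettings and ResourceSettings. A hedged sketch; the component name and values are placeholders taken from the row above, and import paths follow current ZenML (older releases used zenml.steps):

    from zenml import step
    from zenml.config import DockerSettings, ResourceSettings

    @step(
        experiment_tracker="mlflow_stack_comp_name",  # name of a registered component
        settings={
            "experiment_tracker.mlflow": {"experiment_name": "name", "nested": False},
            "docker": DockerSettings(requirements=["scikit-learn"]),
            "resources": ResourceSettings(cpu_count=2, memory="4Gb"),
        },
    )
    def my_step() -> None:
        ...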
dentified by the kubernetes-cluster Resource Type. The resource name is a user-friendly cluster name configured during registration. Authentication Methods Two authentication methods are supported: username and password (not recommended for production purposes), and authentication token, with or without client certificates. For Kubernetes clusters that use neither username and password nor authentication tokens, such as local K3D clusters, the authentication token method can be used with an empty token. This Service Connector does not support generating short-lived credentials from the credentials configured in the Service Connector. In effect, this means that the configured credentials will be distributed directly to clients and used to authenticate to the target Kubernetes API. It is therefore recommended to use API tokens accompanied by client certificates if possible. Auto-configuration The Kubernetes Service Connector allows fetching credentials from the local Kubernetes CLI (i.e. kubectl) during registration. The current kubectl configuration context is used for this purpose. The following is an example of lifting Kubernetes credentials granting access to a GKE cluster: zenml service-connector register kube-auto --type kubernetes --auto-configure Example Command Output Successfully registered service connector `kube-auto` with access to the following resources: ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ ┃ RESOURCE TYPE │ RESOURCE NAMES ┃ ┠───────────────────────┼────────────────┨ ┃ 🌀 kubernetes-cluster │ 35.185.95.223 ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ zenml service-connector describe kube-auto Example Command Output Service connector 'kube-auto' of type 'kubernetes' with id '4315e8eb-fcbd-4938-a4d7-a9218ab372a1' is owned by user 'default' and is 'private'. 'kube-auto' kubernetes Service Connector Details ┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ PROPERTY │ VALUE ┃
how-to
https://docs.zenml.io/how-to/auth-management/kubernetes-service-connector
451
Tekton Orchestrator Orchestrating your pipelines to run on Tekton. Tekton is a powerful and flexible open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems. This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior! When to use it You should use the Tekton orchestrator if: you're looking for a proven production-grade orchestrator. you're looking for a UI in which you can track your pipeline runs. you're already using Kubernetes or are not afraid of setting up and maintaining a Kubernetes cluster. you're willing to deploy and maintain Tekton Pipelines on your cluster. How to deploy it You'll first need to set up a Kubernetes cluster and deploy Tekton Pipelines: A remote ZenML server. See the deployment guide for more information. Have an existing AWS EKS cluster set up. Make sure you have the AWS CLI set up. Download and install kubectl and configure it to talk to your EKS cluster using the following command: aws eks --region REGION update-kubeconfig --name CLUSTER_NAME Install Tekton Pipelines onto your cluster. A remote ZenML server. See the deployment guide for more information. Have an existing GCP GKE cluster set up. Make sure you have the Google Cloud CLI set up first. Download and install kubectl and configure it to talk to your GKE cluster using the following command: gcloud container clusters get-credentials CLUSTER_NAME Install Tekton Pipelines onto your cluster. A remote ZenML server. See the deployment guide for more information. Have an existing AKS cluster set up. Make sure you have the az CLI set up first. Download and install kubectl and configure it to talk to your AKS cluster using the following command: az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME Install Tekton Pipelines onto your cluster.
stack-components
https://docs.zenml.io/stack-components/orchestrators/tekton
418
nml container-registry connect <component-name> -i To connect a Stack Component to an external resource or service, you first need to register one or more Service Connectors, or have someone else in your team with more infrastructure knowledge do it for you. If you already have that covered, you might want to ask ZenML "which resources/services am I even authorized to access with the available Service Connectors?". The resource discovery feature is designed exactly for this purpose. This last check is already included in the interactive ZenML CLI command used to connect a Stack Component to a remote resource. Not all Stack Components support being connected to an external resource or service via a Service Connector. Whether a Stack Component can use a Service Connector to connect to a remote resource or service is shown in the Stack Component flavor details: $ zenml artifact-store flavor describe s3 Configuration class: S3ArtifactStoreConfig Configuration for the S3 Artifact Store. [...] This flavor supports connecting to external resources with a Service Connector. It requires a 's3-bucket' resource. You can get a list of all available connectors and the compatible resources that they can access by running: 'zenml service-connector list-resources --resource-type s3-bucket' If no compatible Service Connectors are yet registered, you can register a new one by running: 'zenml service-connector register -i' For Stack Components that do support Service Connectors, their flavor indicates the Resource Type and, optionally, the Service Connector Type compatible with the Stack Component. This can be used to figure out which resources are available and which Service Connectors can grant access to them. In some cases it is even possible to figure out the exact Resource Name based on the attributes already configured in the Stack Component, which is how ZenML can decide automatically which Resource Name to use in the interactive mode:
how-to
https://docs.zenml.io/how-to/auth-management/service-connectors-guide
381
ct store `s3-zenfiles` to the following resources: ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ ┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃ ┠──────────────────────────────────────┼─────────────────┼────────────────┼───────────────┼────────────────┨ ┃ 19edc05b-92db-49de-bc84-aa9b3fb8261a │ aws-s3-zenfiles │ 🔶 aws │ 📦 s3-bucket │ s3://zenfiles ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ End-to-end examples To get an idea of what a complete end-to-end journey looks like, from registering a Service Connector all the way to configuring Stacks and Stack Components and running pipelines that access remote resources through Service Connectors, take a look at the following full-fledged examples: the AWS Service Connector end-to-end examples, the GCP Service Connector end-to-end examples, the Azure Service Connector end-to-end examples.
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide
368
t_series) return result ZenML Bento Builder step Once you have your bento service and runner defined, we can use the built-in bento builder step to build the bento bundle that will be used to serve the model. The following example shows how you can call the built-in bento builder step within a ZenML pipeline. from zenml import pipeline, step from zenml.integrations.bentoml.steps import bento_builder_step @pipeline def bento_builder_pipeline(): model = ... bento = bento_builder_step( model=model, model_name="pytorch_mnist", # Name of the model model_type="pytorch", # Type of the model (pytorch, tensorflow, sklearn, xgboost..) service="service.py:svc", # Path to the service file within zenml repo labels={ # Labels to be added to the bento bundle "framework": "pytorch", "dataset": "mnist", "zenml_version": "0.21.1", }, exclude=["data"], # Exclude files from the bento bundle python={ "packages": ["zenml", "torch", "torchvision"], }, # Python package requirements of the model ) The Bento Builder step can be used in any orchestration pipeline that you create with ZenML. The step will build the bento bundle and save it to the used artifact store. The bundle can then be used to serve the model in a local setting using the BentoML Model Deployer Step, or in a remote setting using bentoctl or Yatai. This gives you the flexibility to package your model in a way that is ready for different deployment scenarios. ZenML BentoML Deployer step We have now built our bento bundle, and we can use the built-in bentoml_model_deployer_step to deploy the bento bundle to our local HTTP server. The following example shows how to call the built-in bento deployer step within a ZenML pipeline. Note: the bentoml_model_deployer_step can only be used in a local environment. from zenml import pipeline, step from zenml.integrations.bentoml.steps import bentoml_model_deployer_step @pipeline def bento_deployer_pipeline(): bento = ... deployed_model = bentoml_model_deployer_step( bento=bento
stack-components
https://docs.zenml.io/stack-components/model-deployers/bentoml
494
. If not set, the cluster will not be autostopped. down: Tear down the cluster after all jobs finish (successfully or abnormally). If idle_minutes_to_autostop is also set, the cluster will be torn down after the specified idle time. Note that if errors occur during provisioning/data syncing/setting up, the cluster will not be torn down for debugging purposes. stream_logs: If True, show the logs in the terminal as they are generated while the cluster is running. docker_run_args: Additional arguments to pass to the docker run command. For example, ['--gpus=all'] to use all GPUs available on the VM. The following code snippets show how to configure the orchestrator settings for each cloud provider: Code Example: from zenml.integrations.skypilot_aws.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings skypilot_settings = SkypilotAWSOrchestratorSettings( cpus="2", memory="16", accelerators="V100:2", accelerator_args={"tpu_vm": True, "runtime_version": "tpu-vm-base"}, use_spot=True, spot_recovery="recovery_strategy", region="us-west-1", zone="us-west1-a", image_id="ami-1234567890abcdef0", disk_size=100, disk_tier="high", cluster_name="my_cluster", retry_until_up=True, idle_minutes_to_autostop=60, down=True, stream_logs=True, docker_run_args=["--gpus=all"], ) @pipeline( settings={ "orchestrator.vm_aws": skypilot_settings } ) Code Example: from zenml.integrations.skypilot_gcp.flavors.skypilot_orchestrator_gcp_vm_flavor import SkypilotGCPOrchestratorSettings skypilot_settings = SkypilotGCPOrchestratorSettings( cpus="2", memory="16", accelerators="V100:2", accelerator_args={"tpu_vm": True, "runtime_version": "tpu-vm-base"}, use_spot=True, spot_recovery="recovery_strategy", region="us-west1", zone="us-west1-a", image_id="ubuntu-pro-2004-focal-v20231101", disk_size=100, disk_tier="high", cluster_name="my_cluster", retry_until_up=True, idle_minutes_to_autostop=60, down=True, stream_logs=True, ) @pipeline( settings={ "orchestrator.vm_gcp": skypilot_settings
stack-components
https://docs.zenml.io/stack-components/orchestrators/skypilot-vm
533
hich Resource Name to use in the interactive mode: zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles zenml service-connector list-resources --resource-type s3-bucket --resource-id s3://zenfiles zenml artifact-store connect s3-zenfiles --connector aws-multi-type Example Command Output $ zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles Running with active workspace: 'default' (global) Running with active stack: 'default' (global) Successfully registered artifact_store `s3-zenfiles`. $ zenml service-connector list-resources --resource-type s3-bucket --resource-id zenfiles The 's3-bucket' resource with name 'zenfiles' can be accessed by service connectors configured in your workspace: ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ ┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃ ┠──────────────────────────────────────┼──────────────────────┼────────────────┼───────────────┼────────────────┨ ┃ 4a550c82-aa64-4a48-9c7f-d5e127d77a44 │ aws-multi-type │ 🔶 aws │ 📦 s3-bucket │ s3://zenfiles ┃ ┠──────────────────────────────────────┼──────────────────────┼────────────────┼───────────────┼────────────────┨ ┃ 66c0922d-db84-4e2c-9044-c13ce1611613 │ aws-multi-instance │ 🔶 aws │ 📦 s3-bucket │ s3://zenfiles ┃ ┠──────────────────────────────────────┼──────────────────────┼────────────────┼───────────────┼────────────────┨ ┃ 65c82e59-cba0-4a01-b8f6-d75e8a1d0f55 │ aws-single-instance │ 🔶 aws │ 📦 s3-bucket │ s3://zenfiles ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ $ zenml artifact-store connect s3-zenfiles --connector aws-multi-type Running with active workspace: 'default' (global) Running with active stack: 'default' (global) Successfully connected artifact store `s3-zenfiles` to the following resources:
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide
656
ow the recommendations from the Project templates. In steps/alerts/notify_on.py, you will find a step to notify the user about success and a function used to notify the user about step failure using the Alerter from the active stack. We use @step for the success notification to only notify the user about a fully successful pipeline run, not about every successful step. Inside this code file, you can see how developers can work with the Alerter component to send notification messages across configured channels: from zenml.client import Client from zenml import get_step_context alerter = Client().active_stack.alerter def notify_on_failure() -> None: """Notifies user on step failure. Used in Hook.""" step_context = get_step_context() if alerter and step_context.pipeline_run.config.extra["notify_on_failure"]: alerter.post(message=build_message(status="failed")) If the Alerter component is not present in the stack, we suppress the notification, but you can also dump it to the log as an error using: from zenml.client import Client from zenml.logger import get_logger from zenml import get_step_context logger = get_logger(__name__) alerter = Client().active_stack.alerter def notify_on_failure() -> None: """Notifies user on step failure. Used in Hook.""" step_context = get_step_context() if step_context.pipeline_run.config.extra["notify_on_failure"]: if alerter: alerter.post(message=build_message(status="failed")) else: logger.error(build_message(status="failed")) Using the OpenAI ChatGPT failure hook The OpenAI ChatGPT failure hook uses the OpenAI integration to generate a possible fix for whatever exception caused the step to fail. It is quite easy to use. (You will need a valid OpenAI API key with billing correctly set up.) Note that using this integration will incur charges on your OpenAI account. First, ensure that you have the OpenAI integration installed and have stored your API key within a ZenML secret: zenml integration install openai
how-to
https://docs.zenml.io/v/docs/how-to/build-pipelines/use-failure-success-hooks
422
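For symmetry with notify_on_failure above, a success-notification step might look like the following sketch; build_message is stubbed here as a hypothetical stand-in, because its real implementation lives elsewhere in the template:

    from zenml import step
    from zenml.client import Client

    def build_message(status: str) -> str:
        """Hypothetical stand-in for the template's message builder."""
        return f"Pipeline run {status}!"

    alerter = Client().active_stack.alerter

    @step(enable_cache=False)
    def notify_on_success() -> None:
        """Notifies user on pipeline success."""
        if alerter:
            alerter.post(message=build_message(status="succeeded"))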
used to authenticate to remote cloud service APIs. When using auto-configuration with Service Connector registration, this is usually the type of credentials automatically identified and extracted from your local machine. Different cloud providers use different names for these types of long-lived credentials, but they usually represent the same concept, with minor variations regarding the identity information and level of permissions attached to them: AWS has Account Access Keys and IAM User Access Keys GCP has User Account Credentials and Service Account Credentials Generally speaking, a differentiation is being made between the following two classes of credentials: user credentials: credentials representing a human user and usually directly tied to a user account identity. These credentials are usually associated with a broad spectrum of permissions and it is therefore not recommended to share them or make them available outside the confines of your local host. service credentials: credentials used with automated processes and programmatic access, where humans are not directly involved. These credentials are not directly tied to a user account identity, but some other form of accounting like a service account or an IAM user devised to be used by non-human actors. It is also usually possible to restrict the range of permissions associated with this class of credentials, which makes them better candidates for sharing with a larger audience. ZenML cloud provider Service Connectors can use both classes of credentials, but you should aim to use service credentials as often as possible instead of user credentials, especially in production environments. Attaching automated workloads like ML pipelines to service accounts instead of user accounts acts as an extra layer of protection for your user identity and facilitates enforcing another security best practice called "the least-privilege principle": granting each actor only the minimum level of permissions required to function correctly.
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/best-security-practices
337
─────────────────────────────────────────────────┨┃ WORKSPACE │ default ┃ ┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨ ┃ SHARED │ ➖ ┃ ┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨ ┃ CREATED_AT │ 2023-06-05 10:03:32.646351 ┃ ┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨ ┃ UPDATED_AT │ 2023-06-05 10:03:32.646352 ┃ ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ Configuration ┏━━━━━━━━━━┯━━━━━━━━━━┓ ┃ PROPERTY │ VALUE ┃ ┠──────────┼──────────┨ ┃ token │ [HIDDEN] ┃ ┗━━━━━━━━━━┷━━━━━━━━━━┛ Note the temporary nature of the Service Connector. It will expire and become unusable in approximately 1 hour: zenml service-connector list --name azure-session-token Example Command Output Could not import GCP service connector: No module named 'google.api_core'. ┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓ ┃ ACTIVE │ NAME │ ID │ TYPE │ RESOURCE TYPES │ RESOURCE NAME │ SHARED │ OWNER │ EXPIRES IN │ LABELS ┃ ┠────────┼─────────────────────┼──────────────────────────────────────┼──────────┼───────────────────────┼───────────────┼────────┼─────────┼────────────┼────────┨ ┃ │ azure-session-token │ 94d64103-9902-4aa5-8ce4-877061af89af │ 🇦 azure │ 🇦 azure-generic │ <multiple> │ ➖ │ default │ 40m58s │ ┃
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/azure-service-connector
574
What can be configured Here is an example of a sample YAML file, with the most important configuration highlighted. For brevity, we have removed many of the possible keys; to view a sample file with all possible keys, refer to this page. # Build ID (i.e. which Docker image to use) build: dcd6fafb-c200-4e85-8328-428bef98d804 # Enable flags (boolean flags that control behavior) enable_artifact_metadata: True enable_artifact_visualization: False enable_cache: False enable_step_logs: True # Extra dictionary to pass in arbitrary values extra: any_param: 1 another_random_key: "some_string" # Specify the "ZenML Model" model: name: "classification_model" version: production audience: "Data scientists" description: "This classifies hotdogs and not hotdogs" ethics: "No ethical implications" license: "Apache 2.0" limitations: "Only works for hotdogs" tags: ["sklearn", "hotdog", "classification"] # Parameters of the pipeline parameters: dataset_name: "another_dataset" # Name of the run run_name: "my_great_run" # Schedule, if supported on the orchestrator schedule: catchup: true cron_expression: "* * * * *" # Real-time settings for Docker and resources settings: # Controls Docker building docker: apt_packages: ["curl"] copy_files: True dockerfile: "Dockerfile" dockerignore: ".dockerignore" environment: ZENML_LOGGING_VERBOSITY: DEBUG parent_image: "zenml-io/zenml-cuda" requirements: ["torch"] skip_build: False # Control resources for the entire pipeline resources: cpu_count: 2 gpu_count: 1 memory: "4Gb" # Per step configuration steps: # Top-level key should be the name of the step invocation ID train_model: # Parameters of the step parameters: data_source: "best_dataset" # Step-only configuration experiment_tracker: "mlflow_production" step_operator: "vertex_gpu" outputs: {} failure_hook_source: {} success_hook_source: {} # Same as the pipeline-level configuration; if specified, it overrides for this step
how-to
https://docs.zenml.io/v/docs/how-to/use-configuration-files/what-can-be-configured
475
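To apply such a file, the usual route is to pass it when calling the pipeline. A short sketch, assuming the YAML above is saved as config.yaml and my_pipeline is a @pipeline-decorated function:

    from zenml import pipeline

    @pipeline
    def my_pipeline():
        ...  # steps such as train_model would be invoked here

    if __name__ == "__main__":
        my_pipeline.with_options(config_path="config.yaml")()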
-registry │ iam-role │ │ ┃┃ │ │ │ session-token │ │ ┃ ┃ │ │ │ federation-token │ │ ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛ This service connector will not work if Multi-Factor Authentication (MFA) is enabled on the role used by the AWS CLI. When MFA is enabled, the AWS CLI generates temporary credentials that are valid for a limited time. These temporary credentials cannot be used by the ZenML AWS Service Connector, as it requires long-lived credentials to authenticate and access AWS resources. To use the AWS Service Connector with ZenML, you will need to use a different AWS CLI profile that does not have MFA enabled. You can do this by setting the AWS_PROFILE environment variable to the name of the profile you want to use before running the ZenML CLI commands. Prerequisites The AWS Service Connector is part of the AWS ZenML integration. You can either install the entire integration or use a PyPI extra to install it independently of the integration: pip install "zenml[connectors-aws]" installs only prerequisites for the AWS Service Connector Type zenml integration install aws installs the entire AWS ZenML integration It is not required to install and set up the AWS CLI on your local machine to use the AWS Service Connector to link Stack Components to AWS resources and services. However, it is recommended to do so if you are looking for a quick setup that includes using the auto-configuration Service Connector features. The auto-configuration examples in this page rely on the AWS CLI being installed and already configured with valid credentials of one type or another. If you want to avoid installing the AWS CLI, we recommend using the interactive mode of the ZenML CLI to register Service Connectors: zenml service-connector register -i --type aws Resource Types Generic AWS resource
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector
443
Artifact Store and the local filesystem or memory. When calling the Artifact Store API, you should always use URIs that are relative to the Artifact Store root path; otherwise, you risk using an unsupported protocol or storing objects outside the store. You can use the Client singleton to retrieve the root path of the active Artifact Store and then use it as a base path for artifact URIs, e.g.: import os from zenml.client import Client from zenml.io import fileio root_path = Client().active_stack.artifact_store.path artifact_contents = "example artifact" artifact_path = os.path.join(root_path, "artifacts", "examples") artifact_uri = os.path.join(artifact_path, "test.txt") fileio.makedirs(artifact_path) with fileio.open(artifact_uri, "w") as f: f.write(artifact_contents) When using the Artifact Store API to write custom Materializers, the base artifact URI path is already provided. See the documentation on Materializers for an example. The following are some code examples showing how to use the Artifact Store API for various operations: creating folders, writing and reading data directly to/from an artifact store object import os from zenml.utils import io_utils from zenml.io import fileio from zenml.client import Client root_path = Client().active_stack.artifact_store.path artifact_contents = "example artifact" artifact_path = os.path.join(root_path, "artifacts", "examples") artifact_uri = os.path.join(artifact_path, "test.txt") fileio.makedirs(artifact_path) io_utils.write_file_contents_as_string(artifact_uri, artifact_contents) import os from zenml.utils import io_utils from zenml.client import Client root_path = Client().active_stack.artifact_store.path artifact_path = os.path.join(root_path, "artifacts", "examples") artifact_uri = os.path.join(artifact_path, "test.txt") artifact_contents = io_utils.read_file_contents_as_string(artifact_uri)
stack-components
https://docs.zenml.io/stack-components/artifact-stores
407
┃┗━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ Finally, to delete a registered model or a specific model version, you can use the zenml model-registry models delete REGISTERED_MODEL_NAME and zenml model-registry models delete-version REGISTERED_MODEL_NAME -v VERSION commands respectively. Check out the SDK docs to see more about the interface and implementation. PreviousModel Registries NextDevelop a Custom Model Registry Last updated 19 days ago
stack-components
https://docs.zenml.io/v/docs/stack-components/model-registries/mlflow
222
to the active stack zenml stack update -s <NAME>

Once you have added the step operator to your active stack, you can use it to execute individual steps of your pipeline by specifying it in the @step decorator as follows:

from zenml import step

@step(step_operator=<NAME>)
def trainer(...) -> ...:
    """Train a model."""
    # This step will be executed in Vertex.

ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your steps in Vertex AI. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.

Additional configuration

You can specify the service account, network and reserved IP ranges to use for the VertexAI CustomJob by passing the service_account, network and reserved_ip_ranges parameters to the step-operator register command:

zenml step-operator register <STEP_OPERATOR_NAME> \
    --flavor=vertex \
    --project=<GCP_PROJECT> \
    --region=<REGION> \
    --service_account=<SERVICE_ACCOUNT> \ # optionally specify the service account to use for the VertexAI CustomJob
    --network=<NETWORK> \ # optionally specify the network to use for the VertexAI CustomJob
    --reserved_ip_ranges=<RESERVED_IP_RANGES> # optionally specify the reserved IP range to use for the VertexAI CustomJob

For additional configuration of the Vertex step operator, you can pass VertexStepOperatorSettings when defining or running your pipeline.

from zenml import step
from zenml.integrations.gcp.flavors.vertex_step_operator_flavor import VertexStepOperatorSettings

@step(
    step_operator=<NAME>,
    settings={
        "step_operator.vertex": VertexStepOperatorSettings(
            accelerator_type="NVIDIA_TESLA_T4",  # see https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec#AcceleratorType
            accelerator_count=1,
            machine_type="n1-standard-2",  # see https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types
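Since the snippet above is cut off mid-definition, here is a complete, self-contained sketch of the same pattern (the step operator name "vertex" and the step body are illustrative assumptions):

```python
from zenml import step
from zenml.integrations.gcp.flavors.vertex_step_operator_flavor import (
    VertexStepOperatorSettings,
)

vertex_settings = VertexStepOperatorSettings(
    accelerator_type="NVIDIA_TESLA_T4",
    accelerator_count=1,
    machine_type="n1-standard-2",
)

# Assumption: a step operator named "vertex" is registered in the active stack.
@step(step_operator="vertex", settings={"step_operator.vertex": vertex_settings})
def trainer() -> None:
    """Train a model on a Vertex AI custom job with a T4 GPU attached."""
    ...
```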
stack-components
https://docs.zenml.io/stack-components/step-operators/vertex
435
mlflow_training_pipeline', ┃┃ β”‚ β”‚ β”‚ 'zenml_pipeline_run_uuid': 'a5d4faae-ef70-48f2-9893-6e65d5e51e98', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.005'} ┃ ┠────────────────────────┼───────────────┼─────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨ ┃ tensorflow-mnist-model β”‚ 2 β”‚ Run #2 of the mlflow_training_pipeline. β”‚ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_09_08_467212', 'zenml_pipeline_name': 'mlflow_training_pipeline', ┃ ┃ β”‚ β”‚ β”‚ 'zenml_pipeline_run_uuid': '11858dcf-3e47-4b1a-82c5-6fa25ba4e037', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.003'} ┃ ┠────────────────────────┼───────────────┼─────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨ ┃ tensorflow-mnist-model β”‚ 1 β”‚ Run #1 of the mlflow_training_pipeline. β”‚ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_08_52_398499', 'zenml_pipeline_name': 'mlflow_training_pipeline', ┃ ┃ β”‚ β”‚ β”‚ 'zenml_pipeline_run_uuid': '29fb22c1-6e0b-4431-9e04-226226506d16', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.001'} ┃
stack-components
https://docs.zenml.io/stack-components/model-registries/mlflow
558
) print(model.run_metadata["metadata_key"].value)

For further depth, there is an advanced metadata logging guide that goes more into detail about logging metadata in ZenML.

Using the stages of a model

A model's versions can exist in various stages. These are meant to signify their lifecycle state:

staging: This version is staged for production.
production: This version is running in a production setting.
latest: The latest version of the model.
archived: This version is archived and no longer relevant. This stage occurs when a version moves out of any other stage.

from zenml import Model

# Get the latest version of a model
model = Model(
    name="iris_classifier",
    version="latest",
)

# Get the `my_version` version of a model
model = Model(
    name="iris_classifier",
    version="my_version",
)

# Pass the stage into the version field
# to get the `staging` model
model = Model(
    name="iris_classifier",
    version="staging",
)

# This will set this version to production
model.set_stage(stage="production", force=True)

# List staging models
zenml model version list <MODEL_NAME> --stage staging

# Update to production
zenml model version update <MODEL_NAME> <MODEL_VERSION_NAME> -s production

The ZenML Pro dashboard has additional capabilities that include easily changing the stage:

ZenML Model and versions are some of the most powerful features in ZenML. To understand them in a deeper way, read the dedicated Model Management guide.

PreviousManage artifacts NextA starter project

Last updated 15 days ago
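For instance, a short sketch of promoting the latest version and then resolving whatever is currently in production (the model name is reused from the examples above):

```python
from zenml import Model

# Promote the latest version of the model to production
# (force=True demotes any version already in that stage).
latest = Model(name="iris_classifier", version="latest")
latest.set_stage(stage="production", force=True)

# Later, consumers can resolve the production version by stage alone,
# without knowing its concrete version name or number.
prod = Model(name="iris_classifier", version="production")
print(prod.name, prod.version)
```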
user-guide
https://docs.zenml.io/user-guide/starter-guide/track-ml-models
322
Azure Service Connector Configuring Azure Service Connectors to connect ZenML to Azure resources such as Blob storage buckets, AKS Kubernetes clusters, and ACR container registries. The ZenML Azure Service Connector facilitates the authentication and access to managed Azure services and resources. These encompass a range of resources, including blob storage containers, ACR repositories, and AKS clusters. This connector also supports automatic configuration and detection of credentials locally configured through the Azure CLI. This connector serves as a general means of accessing any Azure service by issuing credentials to clients. Additionally, the connector can handle specialized authentication for Azure blob storage, Docker and Kubernetes Python clients. It also allows for the configuration of local Docker and Kubernetes CLIs. $ zenml service-connector list-types --type azure ┏━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━━┓ ┃ NAME β”‚ TYPE β”‚ RESOURCE TYPES β”‚ AUTH METHODS β”‚ LOCAL β”‚ REMOTE ┃ ┠─────────────────────────┼──────────┼───────────────────────┼───────────────────┼───────┼────────┨ ┃ Azure Service Connector β”‚ πŸ‡¦ azure β”‚ πŸ‡¦ azure-generic β”‚ implicit β”‚ βœ… β”‚ βœ… ┃ ┃ β”‚ β”‚ πŸ“¦ blob-container β”‚ service-principal β”‚ β”‚ ┃ ┃ β”‚ β”‚ πŸŒ€ kubernetes-cluster β”‚ access-token β”‚ β”‚ ┃ ┃ β”‚ β”‚ 🐳 docker-registry β”‚ β”‚ β”‚ ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛ Prerequisites The Azure Service Connector is part of the Azure ZenML integration. You can either install the entire integration or use a pypi extra to install it independently of the integration: pip install "zenml[connectors-azure]" installs only prerequisites for the Azure Service Connector Type
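Before registering the connector, it can help to confirm that the locally configured Azure credentials actually resolve — a minimal sketch using the azure-identity package (not part of ZenML itself; assumes `az login` has been run):

```python
# Assumption: the azure-identity package is installed and the Azure CLI
# is logged in, so the CLI credential can mint tokens locally.
from azure.identity import AzureCliCredential

token = AzureCliCredential().get_token("https://management.azure.com/.default")
print("Azure CLI credentials are valid; token expires at", token.expires_on)
```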
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/azure-service-connector
493
Runtime settings for Docker, resources, and stack components

Using settings to configure runtime configuration.

Part of the configuration of a pipeline is its Settings. These allow you to configure runtime configurations for stack components and pipelines. Concretely, they allow you to configure:

The resources required for a step
The containerization process of a pipeline (e.g., what requirements get installed in the Docker image)
Stack component-specific configuration, e.g., if you have an experiment tracker, passing in the name of the experiment at runtime

You will learn about all of the above in more detail later, but for now, let's try to understand that all of this configuration flows through one central concept called BaseSettings. (From here on, we use settings and BaseSettings as analogous in this guide.)

Types of settings

Settings are categorized into two types:

General settings that can be used on all ZenML pipelines. Examples of these are:
DockerSettings to specify Docker settings.
ResourceSettings to specify resource settings.

Stack-component-specific settings: These can be used to supply runtime configurations to certain stack components (key = <COMPONENT_CATEGORY>.<COMPONENT_FLAVOR>). Settings for components not in the active stack will be ignored. Examples of these are:
SkypilotAWSOrchestratorSettings to specify SkyPilot settings (works for SkypilotGCPOrchestratorSettings and SkypilotAzureOrchestratorSettings as well).
KubeflowOrchestratorSettings to specify Kubeflow settings.
MLflowExperimentTrackerSettings to specify MLflow settings.
WandbExperimentTrackerSettings to specify W&B settings.
WhylogsDataValidatorSettings to specify whylogs settings.
SagemakerStepOperatorSettings to specify AWS SageMaker step operator settings.
VertexStepOperatorSettings to specify GCP Vertex step operator settings.
AzureMLStepOperatorSettings to specify AzureML step operator settings.
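As a concrete illustration, a minimal sketch combining a general setting with the settings-key pattern described above (the resource values and requirements are illustrative):

```python
from zenml import pipeline, step
from zenml.config import DockerSettings, ResourceSettings

# General setting: request resources for a single step.
@step(settings={"resources": ResourceSettings(cpu_count=2, memory="4GB")})
def trainer() -> None:
    """A step that requests two CPUs and 4 GB of memory."""
    ...

# General setting at the pipeline level: control the Docker build.
@pipeline(settings={"docker": DockerSettings(requirements=["scikit-learn"])})
def training_pipeline():
    trainer()
```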
how-to
https://docs.zenml.io/v/docs/how-to/use-configuration-files/runtime-configuration
375
Comet Logging and visualizing experiments with Comet. The Comet Experiment Tracker is an Experiment Tracker flavor provided with the Comet ZenML integration that uses the Comet experiment tracking platform to log and visualize information from your pipeline steps (e.g., models, parameters, metrics). When would you want to use it? Comet is a popular platform that you would normally use in the iterative ML experimentation phase to track and visualize experiment results. That doesn't mean that it cannot be repurposed to track and visualize the results produced by your automated pipeline runs, as you make the transition towards a more production-oriented workflow. You should use the Comet Experiment Tracker: if you have already been using Comet to track experiment results for your project and would like to continue doing so as you are incorporating MLOps workflows and best practices in your project through ZenML. if you are looking for a more visually interactive way of navigating the results produced from your ZenML pipeline runs (e.g., models, metrics, datasets) if you would like to connect ZenML to Comet to share the artifacts and metrics logged by your pipelines with your team, organization, or external stakeholders You should consider one of the other Experiment Tracker flavors if you have never worked with Comet before and would rather use another experiment tracking tool that you are more familiar with. How do you deploy it? The Comet Experiment Tracker flavor is provided by the Comet ZenML integration. You need to install it on your local machine to be able to register a Comet Experiment Tracker and add it to your stack: zenml integration install comet -y The Comet Experiment Tracker needs to be configured with the credentials required to connect to the Comet platform using one of the available authentication methods. Authentication Methods You need to configure the following credentials for authentication to the Comet platform:
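Once a Comet Experiment Tracker is registered and part of your active stack, steps opt into tracking via the @step decorator — a minimal sketch (the actual logging calls depend on your Comet setup):

```python
from zenml import step
from zenml.client import Client

# Resolve the experiment tracker from the active stack instead of
# hard-coding its name.
experiment_tracker = Client().active_stack.experiment_tracker

@step(experiment_tracker=experiment_tracker.name)
def train_model() -> None:
    """Anything logged to Comet inside this step is tied to the tracked run."""
    ...
```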
stack-components
https://docs.zenml.io/v/docs/stack-components/experiment-trackers/comet
358
ret Manager, Azure Key Vault, and Hashicorp Vault.

Secrets are sensitive data that you don't want to store in your code or configure alongside your stacks and pipelines. ZenML includes a centralized secrets store that you can use to store and access your secrets securely.

Collaboration

Collaboration is a crucial aspect of any MLOps team, as it often needs to bring together individuals with diverse skills and expertise to create a cohesive and effective workflow for machine learning projects. A successful MLOps team requires seamless collaboration between data scientists, engineers, and DevOps professionals to develop, train, deploy, and maintain machine learning models.

With a deployed ZenML Server, users can create their own teams and project structures. They can easily share pipelines, runs, stacks, and other resources, streamlining the workflow and promoting teamwork.

Dashboard

When you start working with ZenML, you'll begin with a local ZenML setup; when you want to transition to a collaborative environment, you will need to deploy ZenML. Don't worry though, there is a one-click way to do it, which we'll learn about later.

VS Code Extension

ZenML also provides a VS Code extension that allows you to interact with your ZenML stacks, runs, and server directly from your VS Code editor. If you're working on code in your editor, you can easily inspect and switch the stacks you're using, as well as inspect and delete pipelines.

PreviousInstallation NextDeploying ZenML

Last updated 19 days ago
getting-started
https://docs.zenml.io/v/docs/getting-started/core-concepts
306
s to provide GCP credentials to the step operator:

use the gcloud CLI to authenticate locally with GCP. This only works in combination with the local orchestrator.

gcloud auth login
zenml step-operator register <STEP_OPERATOR_NAME> \
    --flavor=vertex \
    --project=<GCP_PROJECT> \
    --region=<REGION>
#   --machine_type=<MACHINE_TYPE> # optionally specify the type of machine to run on

configure the step operator to use a service account key file to authenticate with GCP by setting the service_account_path parameter in the step operator configuration to point to a service account key file. This also works only in combination with the local orchestrator.

zenml step-operator register <STEP_OPERATOR_NAME> \
    --flavor=vertex \
    --project=<GCP_PROJECT> \
    --region=<REGION> \
    --service_account_path=<SERVICE_ACCOUNT_PATH>
#   --machine_type=<MACHINE_TYPE> # optionally specify the type of machine to run on

(recommended) configure a GCP Service Connector with GCP credentials coming from a service account key file or the local gcloud CLI set up with user account credentials, and then link the Vertex AI Step Operator stack component to the Service Connector. This option works with any orchestrator.

zenml service-connector register <CONNECTOR_NAME> --type gcp --auth-method=service-account --project_id=<PROJECT_ID> --service_account_json=@<SERVICE_ACCOUNT_PATH> --resource-type gcp-generic
# Or, as an alternative, you could use the GCP user account locally set up with gcloud
# zenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type gcp-generic --auto-configure

zenml step-operator register <STEP_OPERATOR_NAME> \
    --flavor=vertex \
    --region=<REGION>
#   --machine_type=<MACHINE_TYPE> # optionally specify the type of machine to run on

zenml step-operator connect <STEP_OPERATOR_NAME> --connector <CONNECTOR_NAME>

We can then use the registered step operator in our active stack:

# Add the step operator to the active stack
zenml stack update -s <NAME>
stack-components
https://docs.zenml.io/v/docs/stack-components/step-operators/vertex
453
━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛ ```

Register and connect an AWS Container Registry Stack Component to an ECR container registry:

zenml container-registry register ecr-us-east-1 --flavor aws --uri=715803424590.dkr.ecr.us-east-1.amazonaws.com

Example Command Output

```text
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully registered container_registry `ecr-us-east-1`.
```

```sh
zenml container-registry connect ecr-us-east-1 --connector aws-demo-multi
```

Example Command Output

```text
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully connected container registry `ecr-us-east-1` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃             CONNECTOR ID             β”‚ CONNECTOR NAME β”‚ CONNECTOR TYPE β”‚   RESOURCE TYPE    β”‚                RESOURCE NAMES                ┃
┠──────────────────────────────────────┼────────────────┼────────────────┼────────────────────┼──────────────────────────────────────────────┨
┃ bf073e06-28ce-4a4a-8100-32e7cb99dced β”‚ aws-demo-multi β”‚ πŸ”Ά aws         β”‚ 🐳 docker-registry β”‚ 715803424590.dkr.ecr.us-east-1.amazonaws.com ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```

Combine all Stack Components together into a Stack and set it as active (also throw in a local Image Builder for completeness):

zenml image-builder register local --flavor local

Example Command Output

```text
Running with active workspace: 'default' (global)
Running with active stack: 'default' (global)
Successfully registered image_builder `local`.
```

```sh
zenml stack register aws-demo -a s3-zenfiles -o eks-zenml-zenhacks -c ecr-us-east-1 -i local --set
```

Example Command Output

```text
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector
600
{"metrics": {"accuracy": accuracy}} return modelFor further depth, there is an advanced metadata logging guide that goes more into detail about logging metadata in ZenML. Additionally, there is a lot more to learn about artifacts within ZenML. Please read the dedicated data management guide for more information. Code example This section combines all the code from this section into one simple script that you can use easily: from typing import Optional, Tuple from typing_extensions import Annotated import numpy as np from sklearn.base import ClassifierMixin from sklearn.datasets import load_digits from sklearn.svm import SVC from zenml import ArtifactConfig, pipeline, step, log_artifact_metadata from zenml import save_artifact, load_artifact from zenml.client import Client @step def versioned_data_loader_step() -> ( Annotated[ Tuple[np.ndarray, np.ndarray], ArtifactConfig( name="my_dataset", tags=["digits", "computer vision", "classification"], ), ): """Loads the digits dataset as a tuple of flattened numpy arrays.""" digits = load_digits() return (digits.images.reshape((len(digits.images), -1)), digits.target) @step def model_finetuner_step( model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray] ) -> Annotated[ ClassifierMixin, ArtifactConfig(name="my_model", is_model_artifact=True, tags=["SVC", "trained"]), ]: """Finetunes a given model on a given dataset.""" model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) log_artifact_metadata(metadata={"accuracy": float(accuracy)}) return model @pipeline def model_finetuning_pipeline( dataset_version: Optional[str] = None, model_version: Optional[str] = None, ): client = Client() # Either load a previous version of "my_dataset" or create a new one if dataset_version: dataset = client.get_artifact_version( name_id_or_prefix="my_dataset", version=dataset_version else: dataset = versioned_data_loader_step() # Load the model to finetune
user-guide
https://docs.zenml.io/user-guide/starter-guide/manage-artifacts
434
─────────────────────────────────────────────────┨┃ πŸ‡¦ azure-generic β”‚ ZenML Subscription ┃ ┠───────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨ ┃ πŸ“¦ blob-container β”‚ πŸ’₯ error: connector authorization failure: the 'access-token' authentication method is not supported for blob storage resources ┃ ┠───────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨ ┃ πŸŒ€ kubernetes-cluster β”‚ demo-zenml-demos/demo-zenml-terraform-cluster ┃ ┠───────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨ ┃ 🐳 docker-registry β”‚ demozenmlcontainerregistry.azurecr.io ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ zenml service-connector describe azure-session-token Example Command Output Service connector 'azure-session-token' of type 'azure' with id '94d64103-9902-4aa5-8ce4-877061af89af' is owned by user 'default' and is 'private'. 'azure-session-token' azure Service Connector Details ┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ PROPERTY β”‚ VALUE ┃ ┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨ ┃ ID β”‚ 94d64103-9902-4aa5-8ce4-877061af89af ┃
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/azure-service-connector
472
ifact):  # rather than pd.DataFrame
    pass

Example

The following shows an example of how unmaterialized artifacts can be used in the steps of a pipeline. The pipeline we define will look like this:

s1 -> s3
s2 -> s4

from typing_extensions import Annotated  # or `from typing import Annotated` on Python 3.9+
from typing import Dict, List, Tuple

from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact
from zenml import pipeline, step

@step
def step_1() -> Tuple[
    Annotated[Dict[str, str], "dict_"],
    Annotated[List[str], "list_"],
]:
    return {"some": "data"}, []

@step
def step_2() -> Tuple[
    Annotated[Dict[str, str], "dict_"],
    Annotated[List[str], "list_"],
]:
    return {"some": "data"}, []

@step
def step_3(dict_: Dict, list_: List) -> None:
    assert isinstance(dict_, dict)
    assert isinstance(list_, list)

@step
def step_4(
    dict_: UnmaterializedArtifact,
    list_: UnmaterializedArtifact,
) -> None:
    print(dict_.uri)
    print(list_.uri)

@pipeline
def example_pipeline():
    step_3(*step_1())
    step_4(*step_2())

example_pipeline()

Interaction with custom artifact stores

When creating a custom artifact store, you may encounter a situation where the default materializers do not function properly. Specifically, the self.artifact_store.open method used in these materializers may not be compatible with your custom store if it is not implemented properly. In this case, you can create a modified version of the failing materializer by copying it and adapting it to copy the artifact to a local path, then open it from there.

For example, consider the following implementation of a custom PandasMaterializer that works with a custom artifact store. In this implementation, we copy the artifact to a local path because we want to use the pandas.read_csv method to read it. If we were to use the self.artifact_store.open method instead, we would not need to make this copy.
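A minimal sketch of that workaround — copy the artifact to a temporary local file and read it from there (the CSV file name and helper name are illustrative assumptions, not the actual PandasMaterializer code):

```python
import os
import tempfile

import pandas as pd
from zenml.io import fileio

def read_dataframe(artifact_uri: str) -> pd.DataFrame:
    """Copy the artifact to a local path first, then read it with pandas."""
    with tempfile.TemporaryDirectory() as tmp_dir:
        local_path = os.path.join(tmp_dir, "df.csv")
        # fileio.copy works across all registered artifact store schemes.
        fileio.copy(os.path.join(artifact_uri, "df.csv"), local_path)
        return pd.read_csv(local_path)
```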
how-to
https://docs.zenml.io/v/docs/how-to/handle-data-artifacts/handle-custom-data-types
453
have a look at the SDK docs.

How do you use it?

To log information from a ZenML pipeline step using the Neptune Experiment Tracker component in the active stack, you need to enable an experiment tracker using the @step decorator. Then fetch the Neptune run object and use its logging capabilities as you would normally do. For example:

import numpy as np
import tensorflow as tf
from neptune_tensorflow_keras import NeptuneCallback

from zenml.integrations.neptune.experiment_trackers.run_state import (
    get_neptune_run,
)
from zenml import step

@step(experiment_tracker="<NEPTUNE_TRACKER_STACK_COMPONENT_NAME>")
def tf_trainer(
    x_train: np.ndarray,
    y_train: np.ndarray,
    x_val: np.ndarray,
    y_val: np.ndarray,
    epochs: int = 5,
    lr: float = 0.001,
) -> tf.keras.Model:
    ...
    neptune_run = get_neptune_run()
    model.fit(
        x_train,
        y_train,
        epochs=epochs,
        validation_data=(x_val, y_val),
        callbacks=[
            NeptuneCallback(run=neptune_run),
        ],
    )

    metric = ...
    neptune_run["<METRIC_NAME>"] = metric

Instead of hardcoding an experiment tracker name, you can also use the Client to dynamically use the experiment tracker of your active stack:

from zenml.client import Client

experiment_tracker = Client().active_stack.experiment_tracker

@step(experiment_tracker=experiment_tracker.name)
def tf_trainer(...):
    ...

Additional configuration

You can pass a set of tags to the Neptune run by using the NeptuneExperimentTrackerSettings class, like in the example below:

import numpy as np
import tensorflow as tf

from zenml import step
from zenml.integrations.neptune.experiment_trackers.run_state import (
    get_neptune_run,
)
from zenml.integrations.neptune.flavors import NeptuneExperimentTrackerSettings

neptune_settings = NeptuneExperimentTrackerSettings(tags={"keras", "mnist"})

@step(
    experiment_tracker="<NEPTUNE_TRACKER_STACK_COMPONENT_NAME>",
    settings={
        "experiment_tracker.neptune": neptune_settings
    },
)
def my_step(
    x_test: np.ndarray,
    y_test: np.ndarray,
    model: tf.keras.Model,
) -> float:
    """Log metadata to Neptune run"""
stack-components
https://docs.zenml.io/v/docs/stack-components/experiment-trackers/neptune
465
graph that includes custom TRANSFORMER and ROUTER. If you are looking for an easier way to deploy your models locally, you can use the MLflow Model Deployer flavor.

How to deploy it?

ZenML provides a Seldon Core flavor built on top of the Seldon Core integration that allows you to deploy and use your models in a production-grade environment. In order to use the integration, you need to install it on your local machine to be able to register a Seldon Core Model Deployer with ZenML and add it to your stack:

zenml integration install seldon -y

To deploy and make use of the Seldon Core integration, we need to have the following prerequisites:

access to a Kubernetes cluster. This can be configured using the kubernetes_context configuration attribute to point to a local kubectl context or an in-cluster configuration, but the recommended approach is to use a Service Connector to link the Seldon Deployer Stack Component to a Kubernetes cluster.

Seldon Core needs to be preinstalled and running in the target Kubernetes cluster. Check out the official Seldon Core installation instructions or the EKS installation example below.

models deployed with Seldon Core need to be stored in some form of persistent shared storage that is accessible from the Kubernetes cluster where Seldon Core is installed (e.g. AWS S3, GCS, Azure Blob Storage, etc.). You can use one of the supported remote artifact store flavors to store your models as part of your stack. For a smoother experience running Seldon Core with a cloud artifact store, we also recommend configuring explicit credentials for the artifact store. The Seldon Core model deployer knows how to automatically convert those credentials into the format needed by Seldon Core model servers to authenticate to the storage back-end where models are stored.

Since the Seldon Model Deployer is interacting with the Seldon Core model server deployed on a Kubernetes cluster, you need to provide a set of configuration parameters. These parameters are:
stack-components
https://docs.zenml.io/v/docs/stack-components/model-deployers/seldon
391
EKS clusters that the connector will be allowed to access (e.g. arn:aws:eks:{region}:{account}:cluster/* represents all the EKS clusters available in the target AWS region).

eks:ListClusters
eks:DescribeCluster

In addition to the above permissions, if the credentials are not associated with the same IAM user or role that created the EKS cluster, the IAM principal must be manually added to the EKS cluster's aws-auth ConfigMap, otherwise the Kubernetes client will not be allowed to access the cluster's resources. This makes it more challenging to use the AWS Implicit and AWS Federation Token authentication methods for this resource. For more information, see this documentation.

If set, the resource name must identify an EKS cluster using one of the following formats:

EKS cluster name (canonical resource name): {cluster-name}
EKS cluster ARN: arn:aws:eks:{region}:{account}:cluster/{cluster-name}

EKS cluster names are region scoped. The connector can only be used to access EKS clusters in the AWS region that it is configured to use.

────────────────────────────────────────────────────────────────────────────────

zenml service-connector describe-type aws --auth-method secret-key

Example Command Output

╔══════════════════════════════════════════════════════════════════════════════╗
β•‘             πŸ”’ AWS Secret Key (auth method: secret-key)                      β•‘
β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

Supports issuing temporary credentials: False

Long-lived AWS credentials consisting of an AWS access key ID and secret access key associated with an AWS IAM user or AWS account root user (not recommended).

This method is preferred during development and testing due to its simplicity and ease of use. It is not recommended as a direct authentication method for production use cases because the clients have direct access to long-lived credentials and are granted the full set of permissions of the IAM user or AWS
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide
462
oud_container_registry --connector cloud_connector

With the components registered, everything is set up for the next steps. For more information, you can always check the dedicated SkyPilot orchestrator guide.

In order to launch a pipeline on Azure with the SkyPilot orchestrator, the first thing that you need to do is to install the Azure and SkyPilot integrations:

zenml integration install azure skypilot_azure -y

Before we start registering any components, there is another step that we have to execute. As we explained in the previous section, components such as orchestrators and container registries often require you to set up the right permissions. In ZenML, this process is simplified with the use of Service Connectors. For this example, we will need to use the Service Principal authentication feature of our Azure service connector:

zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id=<TENANT_ID> --client_id=<CLIENT_ID> --client_secret=<CLIENT_SECRET>

Once the service connector is set up, we can register a SkyPilot orchestrator:

zenml orchestrator register skypilot_orchestrator -f vm_azure
zenml orchestrator connect skypilot_orchestrator --connector cloud_connector

The next step is to register an Azure container registry. Similar to the orchestrator, we will use our connector as we are setting up the container registry.

zenml container-registry register cloud_container_registry -f azure --uri=<REGISTRY_NAME>.azurecr.io
zenml container-registry connect cloud_container_registry --connector cloud_connector

With the components registered, everything is set up for the next steps. For more information, you can always check the dedicated SkyPilot orchestrator guide.

Having trouble with setting up infrastructure? Try reading the stack deployment section of the docs to gain more insight. If that still doesn't work, join the ZenML community and ask!

Running a pipeline on a cloud stack
user-guide
https://docs.zenml.io/v/docs/user-guide/production-guide/cloud-orchestration
402
eatures integrations with label_studio and pigeon.

| Annotator | Flavor | Integration | Notes |
|---|---|---|---|
| ArgillaAnnotator | argilla | argilla | Connect ZenML with Argilla |
| LabelStudioAnnotator | label_studio | label_studio | Connect ZenML with Label Studio |
| PigeonAnnotator | pigeon | pigeon | Connect ZenML with Pigeon. Notebook only & for image and text classification tasks. |
| ProdigyAnnotator | prodigy | prodigy | Connect ZenML with Prodigy |
| Custom Implementation | custom | | Extend the annotator abstraction and provide your own implementation |

If you would like to see the available flavors for annotators, you can use the command:

zenml annotator flavor list

How to use it

The available implementation of the annotator is built on top of the Label Studio integration, which means that using an annotator is currently no different from what's described on the Label Studio page: How to use it?. (Pigeon is also supported, but has very limited functionality and only works within Jupyter notebooks.)

A note on names

The various annotation tools have mostly standardized around the naming of key concepts as part of how they build their tools. Unfortunately, this hasn't been completely unified, so ZenML takes an opinion on which names we use for our stack components and integrations. Key differences to note:

Label Studio refers to the grouping of a set of annotations/tasks as a 'Project', whereas most other tools use the term 'Dataset', so ZenML also calls this grouping a 'Dataset'.

The individual meta-unit for 'an annotation + the source data' is referred to in different ways, but at ZenML (and with Label Studio) we refer to them as 'tasks'.

The remaining core concepts ('annotation' and 'prediction', in particular) are broadly used among annotation tools.

PreviousDevelop a Custom Image Builder NextArgilla

Last updated 19 days ago
stack-components
https://docs.zenml.io/v/docs/stack-components/annotators
373
Access secrets in a step Fetching secret values in a step ZenML secrets are groupings of key-value pairs which are securely stored in the ZenML secrets store. Additionally, a secret always has a name that allows you to fetch or reference them in your pipelines and stacks. In order to learn more about how to configure and create secrets, please refer to the platform guide on secrets. You can access secrets directly from within your steps through the ZenML Client API. This allows you to use your secrets for querying APIs from within your step without hard-coding your access keys: from zenml import step from zenml.client import Client from somewhere import authenticate_to_some_api @step def secret_loader() -> None: """Load the example secret from the server.""" # Fetch the secret from ZenML. secret = Client().get_secret("<SECRET_NAME>") # `secret.secret_values` will contain a dictionary with all key-value # pairs within your secret. authenticate_to_some_api( username=secret.secret_values["username"], password=secret.secret_values["password"], ... See Also: Learn how to create and manage secrets Find out more about the secrets backend in ZenML PreviousVersion pipelines NextFetching pipelines Last updated 16 days ago
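For completeness, the secret referenced above can also be created from Python — a minimal sketch (the CLI equivalent is `zenml secret create`; the secret name and values are illustrative, and the exact Client method signature may vary by ZenML version):

```python
from zenml.client import Client

# Create the secret that the step above fetches by name.
Client().create_secret(
    name="example_secret",
    values={"username": "admin", "password": "abc123"},
)
```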
how-to
https://docs.zenml.io/v/docs/how-to/build-pipelines/access-secrets-in-a-step
256
w_bucket=gs://my_bucket --provider=<YOUR_PROVIDER>

You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.

Authentication Methods

You need to configure the following credentials for authentication to a remote MLflow tracking server:

tracking_uri: The URL pointing to the MLflow tracking server. If using an MLflow Tracking Server managed by Databricks, then the value of this attribute should be "databricks".
tracking_username: Username for authenticating with the MLflow tracking server.
tracking_password: Password for authenticating with the MLflow tracking server.
tracking_token (in place of tracking_username and tracking_password): Token for authenticating with the MLflow tracking server.
tracking_insecure_tls (optional): Set to skip verifying the MLflow tracking server SSL certificate.
databricks_host: The host of the Databricks workspace with the MLflow-managed server to connect to. This is only required if the tracking_uri value is set to "databricks". More information: Access the MLflow tracking server from outside Databricks

Either tracking_token or tracking_username and tracking_password must be specified.

This option configures the credentials for the MLflow tracking service directly as stack component attributes. This is not recommended for production settings, as the credentials won't be stored securely and will be clearly visible in the stack configuration.

# Register the MLflow experiment tracker
zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \
    --tracking_uri=<URI> --tracking_token=<TOKEN>

# You can also register it like this:
# zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \
#     --tracking_uri=<URI> --tracking_username=<USERNAME> --tracking_password=<PASSWORD>

# Register and set a stack with the new experiment tracker
stack-components
https://docs.zenml.io/stack-components/experiment-trackers/mlflow
400
DB services support this, CloudSQL is an example).Collect information from your secrets management service Using an externally managed secrets management service like those offered by Google Cloud, AWS, Azure or HashiCorp Vault is optional, but is recommended if you are already using those cloud service providers. If omitted, ZenML will default to using the SQL database to store secrets. If you decide to use an external secrets management service, you will need to collect and prepare the following information for the Helm chart configuration (for supported back-ends only): For the AWS secrets manager: the AWS region that you want to use to store your secrets an AWS access key ID and secret access key that provides full access to the AWS secrets manager service. You can create a dedicated IAM user for this purpose, or use an existing user with the necessary permissions. If you deploy the ZenML server in an EKS Kubernetes cluster that is already configured to use implicit authorization with an IAM role for service accounts, you can omit this step. For the Google Cloud secrets manager: the Google Cloud project ID that you want to use to store your secrets a Google Cloud service account that has access to the secrets manager service. You can create a dedicated service account for this purpose, or use an existing service account with the necessary permissions. For the Azure Key Vault: the name of the Azure Key Vault that you want to use to store your secrets the Azure tenant ID, client ID, and client secret associated with the Azure service principal that will be used to access the Azure Key Vault. You can create a dedicated application service principal for this purpose, or use an existing service principal with the necessary permissions. If you deploy the ZenML server in an AKS Kubernetes cluster that is already configured to use implicit authorization through the Azure-managed identity service, you can omit this step. For the HashiCorp Vault: the URL of the HashiCorp Vault server
getting-started
https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-helm
385
run ZenML steps in remote specialized environmentsAs you transition to a team setting or a production setting, you can replace the local Artifact Store in your stack with one of the other flavors that are better suited for these purposes, with no changes required in your code. How do you deploy it? The default stack that comes pre-configured with ZenML already contains a local Artifact Store: $ zenml stack list Running without an active repository root. Using the default local database. ┏━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━┓ ┃ ACTIVE β”‚ STACK NAME β”‚ ARTIFACT_STORE β”‚ ORCHESTRATOR ┃ ┠────────┼────────────┼────────────────┼──────────────┨ ┃ πŸ‘‰ β”‚ default β”‚ default β”‚ default ┃ ┗━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━┛ $ zenml artifact-store describe Running without an active repository root. Using the default local database. Running with active stack: 'default' No component name given; using `default` from active stack. ARTIFACT_STORE Component Configuration (ACTIVE) ┏━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ COMPONENT_PROPERTY β”‚ VALUE ┃ ┠────────────────────┼──────────────────────────────────────────────────────────────────────────────┨ ┃ TYPE β”‚ artifact_store ┃ ┠────────────────────┼──────────────────────────────────────────────────────────────────────────────┨ ┃ FLAVOR β”‚ local ┃ ┠────────────────────┼──────────────────────────────────────────────────────────────────────────────┨ ┃ NAME β”‚ default ┃ ┠────────────────────┼──────────────────────────────────────────────────────────────────────────────┨
stack-components
https://docs.zenml.io/v/docs/stack-components/artifact-stores/local
454
Version pipelines

Understanding how and when the version of a pipeline is incremented.

You might have noticed that when you run a pipeline in ZenML with the same name, but with different steps, it creates a new version of the pipeline. Consider our example pipeline:

from zenml import pipeline

@pipeline
def first_pipeline(gamma: float = 0.002):
    X_train, X_test, y_train, y_test = training_data_loader()
    svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)

if __name__ == "__main__":
    first_pipeline()

Running this the first time will create a single run for version 1 of the pipeline called first_pipeline.

$ python run.py
...
Registered pipeline first_pipeline (version 1).
...

Running it again (python run.py) will create yet another run for version 1 of the pipeline called first_pipeline. So now the same pipeline has two runs. You can also verify this in the dashboard.

However, now let's change the pipeline configuration itself. You can do this by modifying the step connections within the @pipeline function or by replacing a concrete step with another one. For example, let's create an alternative step called digits_data_loader which loads a different dataset.

import pandas as pd
from typing import Tuple
from typing_extensions import Annotated

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

from zenml import pipeline, step

@step
def digits_data_loader() -> Tuple[
    Annotated[pd.DataFrame, "X_train"],
    Annotated[pd.DataFrame, "X_test"],
    Annotated[pd.Series, "y_train"],
    Annotated[pd.Series, "y_test"],
]:
    """Loads the digits dataset and splits it into train and test data."""
    # Load data from the digits dataset
    digits = load_digits(as_frame=True)
    # Split into datasets
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.2, shuffle=True
    )
    return X_train, X_test, y_train, y_test

@pipeline
def first_pipeline(gamma: float = 0.002):
    X_train, X_test, y_train, y_test = digits_data_loader()
    svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)
how-to
https://docs.zenml.io/v/docs/how-to/build-pipelines/version-pipelines
463
(self, path): """Checks if a path exists.""" ...As each component defines a different interface, make sure to check out the base class definition of the component type that you want to implement and also check out the documentation on how to extend specific stack components. If you would like to automatically track some metadata about your custom stack component with each pipeline run, you can do so by defining some additional methods in your stack component implementation class as shown in the Tracking Custom Stack Component Metadata section. See the full code of the base StackComponent class here. Base Abstraction 2: StackComponentConfig As the name suggests, the StackComponentConfig is used to configure a stack component instance. It is separated from the actual implementation on purpose. This way, ZenML can use this class to validate the configuration of a stack component during its registration/update, without having to import heavy (or even non-installed) dependencies. The config and settings of a stack component are two separate, yet related entities. The config is the static part of your flavor's configuration, defined when you register your flavor. The settings are the dynamic part of your flavor's configuration that can be overridden at runtime. You can read more about the differences here. Let us now continue with the base artifact store example from above and take a look at the BaseArtifactStoreConfig: from zenml.stack import StackComponentConfig class BaseArtifactStoreConfig(StackComponentConfig): """Config class for `BaseArtifactStore`.""" path: str SUPPORTED_SCHEMES: ClassVar[Set[str]] ... Through the BaseArtifactStoreConfig, each artifact store will require users to define a path variable. Additionally, the base config requires all artifact store flavors to define a SUPPORTED_SCHEMES class variable that ZenML will use to check if the user-provided path is actually supported by the flavor. See the full code of the base StackComponentConfig class here. Base Abstraction 3: Flavor
how-to
https://docs.zenml.io/v/docs/how-to/stack-deployment/implement-a-custom-stack-component
392
i-fi themed corpus about "ZenML World"

corpus = [
    "The luminescent forests of ZenML World are inhabited by glowing Zenbots that emit a soft, pulsating light as they roam the enchanted landscape.",
    "In the neon skies of ZenML World, Cosmic Butterflies flutter gracefully, their iridescent wings leaving trails of stardust in their wake.",
    "Telepathic Treants, ancient sentient trees, communicate through the quantum neural network that spans the entire surface of ZenML World, sharing wisdom and knowledge.",
    "Deep within the melodic caverns of ZenML World, Fractal Fungi emit pulsating tones that resonate through the crystalline structures, creating a symphony of otherworldly sounds.",
    "Near the ethereal waterfalls of ZenML World, Holographic Hummingbirds hover effortlessly, their translucent wings refracting the prismatic light into mesmerizing patterns.",
    "Gravitational Geckos, masters of anti-gravity, traverse the inverted cliffs of ZenML World, defying the laws of physics with their extraordinary abilities.",
    "Plasma Phoenixes, majestic creatures of pure energy, soar above the chromatic canyons of ZenML World, their fiery trails painting the sky in a dazzling display of colors.",
    "Along the prismatic shores of ZenML World, Crystalline Crabs scuttle and burrow, their transparent exoskeletons refracting the light into a kaleidoscope of hues.",
]

corpus = [preprocess_text(sentence) for sentence in corpus]

question1 = "What are Plasma Phoenixes?"
answer1 = answer_question(question1, corpus)
print(f"Question: {question1}")
print(f"Answer: {answer1}")

question2 = (
    "What kinds of creatures live on the prismatic shores of ZenML World?"
)
answer2 = answer_question(question2, corpus)
print(f"Question: {question2}")
print(f"Answer: {answer2}")

irrelevant_question_3 = "What is the capital of Panglossia?"
answer3 = answer_question(irrelevant_question_3, corpus)
print(f"Question: {irrelevant_question_3}")
print(f"Answer: {answer3}")

This outputs the following:

Question: What are Plasma Phoenixes?
user-guide
https://docs.zenml.io/v/docs/user-guide/llmops-guide/rag-with-zenml/rag-85-loc
463
β”‚ s3://zenml-demos ┃┃ β”‚ s3://zenml-generative-chat ┃ ┃ β”‚ s3://zenml-public-datasets ┃ ┠───────────────────────┼──────────────────────────────────────────────┨ ┃ πŸŒ€ kubernetes-cluster β”‚ zenhacks-cluster ┃ ┠───────────────────────┼──────────────────────────────────────────────┨ ┃ 🐳 docker-registry β”‚ 715803424590.dkr.ecr.us-east-1.amazonaws.com ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ No credentials are stored with the Service Connector: zenml service-connector describe aws-implicit Example Command Output Service connector 'aws-implicit' of type 'aws' with id 'e3853748-34a0-4d78-8006-00422ad32884' is owned by user 'default' and is 'private'. 'aws-implicit' aws Service Connector Details ┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ PROPERTY β”‚ VALUE ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ ID β”‚ 9a810521-ef41-4e45-bb48-8569c5943dc6 ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ NAME β”‚ aws-implicit ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ TYPE β”‚ πŸ”Ά aws ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ AUTH METHOD β”‚ implicit ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ RESOURCE TYPES β”‚ πŸ”Ά aws-generic, πŸ“¦ s3-bucket, πŸŒ€ kubernetes-cluster, 🐳 docker-registry ┃
how-to
https://docs.zenml.io/how-to/auth-management/aws-service-connector
514
rets:

Setting it to NONE disables any validation.

Setting it to SECRET_EXISTS only validates the existence of secrets. This might be useful if the machine you're running on only has permission to list secrets but not actually read their values.

Setting it to SECRET_AND_KEY_EXISTS (the default) validates both the secret existence as well as the existence of the exact key-value pair.

Fetch secret values in a step

If you are using centralized secrets management, you can access secrets directly from within your steps through the ZenML Client API. This allows you to use your secrets for querying APIs from within your step without hard-coding your access keys:

from zenml import step
from zenml.client import Client
from somewhere import authenticate_to_some_api

@step
def secret_loader() -> None:
    """Load the example secret from the server."""
    # Fetch the secret from ZenML.
    secret = Client().get_secret("<SECRET_NAME>")

    # `secret.secret_values` will contain a dictionary with all key-value
    # pairs within your secret.
    authenticate_to_some_api(
        username=secret.secret_values["username"],
        password=secret.secret_values["password"],
    )
    ...

See Also

Interact with secrets: Learn how to create, list, and delete secrets using the ZenML CLI and Python SDK.

PreviousDeploy a stack using mlstacks NextImplement a custom stack component

Last updated 19 days ago
how-to
https://docs.zenml.io/v/docs/how-to/stack-deployment/reference-secrets-in-stack-configuration
273
N -x mlflow_bucket=gs://my_bucket Artifact StoresFor an artifact store, you can pass bucket_name as an argument to the command. zenml artifact-store deploy s3_artifact_store --flavor=s3 --provider=aws -r YOUR_REGION -x bucket_name=my_bucket Container Registries For container registries, you can pass the repository name using repo_name: zenml container-registry deploy aws_registry --flavor=aws -p aws -r YOUR_REGION -x repo_name=my_repo This is only useful for the AWS case since AWS requires a repository to be created before pushing images to it and the deploy command ensures that a repository with the name you provide is created. In case of GCP and other providers, you can choose the repository name at the same time as you are pushing the image via code. This is achieved through setting the target_repo attribute of the DockerSettings object. Other configuration In the case of GCP components, it is required that you pass a project ID to the command as extra configuration when you're creating any GCP resource. PreviousManage stacks NextDeploy a stack using mlstacks Last updated 19 days ago
how-to
https://docs.zenml.io/v/docs/how-to/stack-deployment/deploy-a-stack-component
242
er Image Builder stack component, or the Vertex AIOrchestrator and Step Operator. It should be accompanied by a matching set of GCP permissions that allow access to the set of remote resources required by the client and Stack Component. The resource name represents the GCP project that the connector is authorized to access. πŸ“¦ GCP GCS bucket (resource type: gcs-bucket) Authentication methods: implicit, user-account, service-account, oauth2-token, impersonation Supports resource instances: True Authentication methods: πŸ”’ implicit πŸ”’ user-account πŸ”’ service-account πŸ”’ oauth2-token πŸ”’ impersonation Allows Stack Components to connect to GCS buckets. When used by Stack Components, they are provided a pre-configured GCS Python client instance. The configured credentials must have at least the following GCP permissions associated with the GCS buckets that it can access: storage.buckets.list storage.buckets.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list storage.objects.update For example, the GCP Storage Admin role includes all of the required permissions, but it also includes additional permissions that are not required by the connector. If set, the resource name must identify a GCS bucket using one of the following formats: GCS bucket URI: gs://{bucket-name} GCS bucket name: {bucket-name} [...] ──────────────────────────────────────────────────────────────────────────────── Please select a resource type or leave it empty to create a connector that can be used to access any of the supported resource types (gcp-generic, gcs-bucket, kubernetes-cluster, docker-registry). []: gcs-bucket Would you like to attempt auto-configuration to extract the authentication configuration from your local environment ? [y/N]: y Service connector auto-configured successfully with the following configuration: Service connector 'gcp-interactive' of type 'gcp' is 'private'. 'gcp-interactive' gcp Service Connector Details ┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━┓
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide
450
Prodigy

Annotating data using Prodigy.

Prodigy is a modern annotation tool for creating training and evaluation data for machine learning models. You can also use Prodigy to help you inspect and clean your data, do error analysis, and develop rule-based systems to use in combination with your statistical models.

Prodigy is a paid annotation tool. A license is required to download and use it with ZenML.

The Prodigy Python library includes a range of pre-built workflows and command-line commands for various tasks, and well-documented components for implementing your own workflow scripts. Your scripts can specify how the data is loaded and saved, change which questions are asked in the annotation interface, and can even define custom HTML and JavaScript to change the behavior of the front-end. The web application is optimized for fast, intuitive and efficient annotation.

When would you want to use it?

If you need to label data as part of your ML workflow, that is the point at which you could consider adding the optional annotator stack component as part of your ZenML stack.

How to deploy it?

The Prodigy Annotator flavor is provided by the Prodigy ZenML integration. You need to install it to be able to register it as an Annotator and add it to your stack:

zenml integration export-requirements --output-file prodigy-requirements.txt prodigy

Note that you'll need to install Prodigy separately since it requires a license. Please visit the Prodigy docs for information on how to install it. Currently, Prodigy also requires the urllib3<2 dependency, so make sure to install that.

Then register your annotator with ZenML:

zenml annotator register prodigy --flavor prodigy
# optionally also pass in --custom_config_path="<PATH_TO_CUSTOM_CONFIG_FILE>"

See https://prodi.gy/docs/install#config for more on custom Prodigy config files. Passing a custom_config_path allows you to override the default Prodigy config.
stack-components
https://docs.zenml.io/v/docs/stack-components/annotators/prodigy
418
-vm --type aws --region=us-east-1 --auto-configure

This will automatically configure the service connector with the appropriate credentials and permissions to provision VMs on AWS. You can then use the service connector to configure your registered VM Orchestrator stack component using the following command:

# Register the orchestrator
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_aws

# Connect the orchestrator to the service connector
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector aws-skypilot-vm

# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set

We first need to install the SkyPilot integration for GCP and the GCP extra for ZenML, using the following two commands:

pip install "zenml[connectors-gcp]"
zenml integration install gcp skypilot_gcp

To provision VMs on GCP, your VM Orchestrator stack component needs to be configured to authenticate with a GCP Service Connector. To configure the GCP Service Connector, you need to register a new service connector, but first let's check the available service connector types using the following command:

zenml service-connector list-types --type gcp

┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━━┓
┃         NAME          β”‚  TYPE  β”‚    RESOURCE TYPES     β”‚  AUTH METHODS   β”‚ LOCAL β”‚ REMOTE ┃
┠───────────────────────┼────────┼───────────────────────┼─────────────────┼───────┼────────┨
┃ GCP Service Connector β”‚ πŸ”΅ gcp β”‚ πŸ”΅ gcp-generic        β”‚ implicit        β”‚ βœ…    β”‚ βž–     ┃
┃                       β”‚        β”‚ πŸ“¦ gcs-bucket         β”‚ user-account    β”‚       β”‚        ┃
┃                       β”‚        β”‚ πŸŒ€ kubernetes-cluster β”‚ service-account β”‚       β”‚        ┃
┃                       β”‚        β”‚ 🐳 docker-registry    β”‚ oauth2-token    β”‚       β”‚        ┃
┃                       β”‚        β”‚                       β”‚ impersonation   β”‚       β”‚        ┃
stack-components
https://docs.zenml.io/v/docs/stack-components/orchestrators/skypilot-vm
507
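Once a GCP Service Connector is registered, the GCP flow mirrors the AWS commands above. A sketch (assuming the GCP SkyPilot orchestrator flavor is named vm_gcp, by analogy with vm_aws):

# Register a GCP service connector, auto-configured from local gcloud credentials
zenml service-connector register gcp-skypilot-vm --type gcp --auto-configure

# Register the orchestrator and connect it to the service connector
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_gcp
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector gcp-skypilot-vm

# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set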
ased key, create the service connector as follows: zenml service-connector register <SERVICE_CONNECTOR_NAME> --type=hyperai --auth-method=rsa-key --base64_ssh_key=<BASE64_SSH_KEY> --hostnames=<INSTANCE_1>,<INSTANCE_2>,..,<INSTANCE_N> --username=<INSTANCE_USERNAME> Hostnames are either DNS resolvable names or IP addresses. For example, if you have two servers - one at 1.2.3.4 and another at 4.3.2.1 - you could provide them as --hostnames=1.2.3.4,4.3.2.1. Optionally, it is possible to provide a passphrase for the key (--ssh_passphrase). After registering the service connector, we can register the orchestrator and use it in our active stack: zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=hyperai # Register and activate a stack with the new orchestrator zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set You can now run any ZenML pipeline using the HyperAI orchestrator: python file_that_runs_a_zenml_pipeline.py Enabling CUDA for GPU-backed hardware Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. This requires some extra settings customization and is essential for enabling CUDA so the GPU can deliver its full acceleration.
stack-components
https://docs.zenml.io/v/docs/stack-components/orchestrators/hyperai
338
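As a sketch of what the CUDA-related settings customization can look like (the parent image tag below is a hypothetical example; pick one matching your framework and CUDA versions):

from zenml import pipeline
from zenml.config import DockerSettings

# Hypothetical CUDA-enabled base image; adjust to your framework/CUDA version.
docker_settings = DockerSettings(
    parent_image="pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime",
    requirements=["zenml", "torchvision"],
)

@pipeline(settings={"docker": docker_settings})
def gpu_pipeline():
    ...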
e, onerror: Optional[Callable[..., None]] = None,) -> Iterable[Tuple[PathType, List[PathType], List[PathType]]]: """Return an iterator that walks the contents of the given directory.""" class BaseArtifactStoreFlavor(Flavor): """Base class for artifact store flavors.""" @property @abstractmethod def name(self) -> str: """Returns the name of the flavor.""" @property def type(self) -> StackComponentType: """Returns the flavor type.""" return StackComponentType.ARTIFACT_STORE @property def config_class(self) -> Type[StackComponentConfig]: """Config class.""" return BaseArtifactStoreConfig @property @abstractmethod def implementation_class(self) -> Type["BaseArtifactStore"]: """Implementation class.""" This is a slimmed-down version of the base implementation which aims to highlight the abstraction layer. In order to see the full implementation and get the complete docstrings, please check the SDK docs. The effect on zenml.io.fileio If you create an instance of an artifact store, add it to your stack, and activate the stack, ZenML will create a filesystem each time you run a pipeline and make it available to the zenml.io.fileio module. This means that when you utilize a method such as fileio.open(...) with a file path that starts with one of the SUPPORTED_SCHEMES within your steps or materializers, it will be able to use the open(...) method that you defined within your artifact store. Build your own custom artifact store If you want to implement your own custom Artifact Store, you can follow these steps: Create a class that inherits from the BaseArtifactStore class and implements the abstract methods. Create a class that inherits from the BaseArtifactStoreConfig class and fill in the SUPPORTED_SCHEMES based on your file system. Bring both of these classes together by inheriting from the BaseArtifactStoreFlavor class (a minimal sketch follows below).
stack-components
https://docs.zenml.io/v/docs/stack-components/artifact-stores/custom
398
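A minimal sketch of those three steps for a hypothetical myscheme:// filesystem (the abstract method bodies are omitted here; a real implementation must fill in open, copyfile, exists, walk, and the rest):

from typing import ClassVar, Set, Type

from zenml.artifact_stores import (
    BaseArtifactStore,
    BaseArtifactStoreConfig,
    BaseArtifactStoreFlavor,
)


class MyArtifactStoreConfig(BaseArtifactStoreConfig):
    """Declares which URI schemes this artifact store handles."""

    SUPPORTED_SCHEMES: ClassVar[Set[str]] = {"myscheme://"}


class MyArtifactStore(BaseArtifactStore):
    """Implement the abstract filesystem methods (open, exists, walk, ...) here."""


class MyArtifactStoreFlavor(BaseArtifactStoreFlavor):
    """Ties the config and implementation classes together."""

    @property
    def name(self) -> str:
        return "my_artifact_store"

    @property
    def config_class(self) -> Type[BaseArtifactStoreConfig]:
        return MyArtifactStoreConfig

    @property
    def implementation_class(self) -> Type[BaseArtifactStore]:
        return MyArtifactStore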
guration set up by the GCP CLI on your local host.The following is an example of lifting GCP user credentials granting access to the same set of GCP resources and services that the local GCP CLI is allowed to access. The GCP CLI should already be configured with valid credentials (i.e. by running gcloud auth application-default login). In this case, the GCP user account authentication method is automatically detected: zenml service-connector register gcp-auto --type gcp --auto-configure Example Command Output Successfully registered service connector `gcp-auto` with access to the following resources: ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ RESOURCE TYPE β”‚ RESOURCE NAMES ┃ ┠───────────────────────┼─────────────────────────────────────────────────┨ ┃ πŸ”΅ gcp-generic β”‚ zenml-core ┃ ┠───────────────────────┼─────────────────────────────────────────────────┨ ┃ πŸ“¦ gcs-bucket β”‚ gs://zenml-bucket-sl ┃ ┃ β”‚ gs://zenml-core.appspot.com ┃ ┃ β”‚ gs://zenml-core_cloudbuild ┃ ┃ β”‚ gs://zenml-datasets ┃ ┃ β”‚ gs://zenml-internal-artifact-store ┃ ┃ β”‚ gs://zenml-kubeflow-artifact-store ┃ ┃ β”‚ gs://zenml-project-time-series-bucket ┃ ┠───────────────────────┼─────────────────────────────────────────────────┨ ┃ πŸŒ€ kubernetes-cluster β”‚ zenml-test-cluster ┃ ┠───────────────────────┼─────────────────────────────────────────────────┨ ┃ 🐳 docker-registry β”‚ gcr.io/zenml-core ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ zenml service-connector describe gcp-auto Example Command Output
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector
491
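With the connector registered, connecting a stack component to one of the discovered resources is a one-liner per component. For example, using the gs://zenml-bucket-sl bucket listed above (the artifact store name is illustrative):

zenml artifact-store register gcs-store --flavor gcp --path=gs://zenml-bucket-sl
zenml artifact-store connect gcs-store --connector gcp-auto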
─────────────────────────────────────────────────┨┃ ID β”‚ 37b6000e-3f7f-483e-b2c5-7a5db44fe66b ┃ ┠────────────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ NAME β”‚ gcp-workload-identity ┃ ┠────────────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ TYPE β”‚ πŸ”΅ gcp ┃ ┠────────────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ AUTH METHOD β”‚ external-account ┃ ┠────────────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ RESOURCE TYPES β”‚ πŸ”΅ gcp-generic, πŸ“¦ gcs-bucket, πŸŒ€ kubernetes-cluster, 🐳 docker-registry ┃ ┠────────────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ RESOURCE NAME β”‚ <multiple> ┃ ┠────────────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ SECRET ID β”‚ 1ff6557f-7f60-4e63-b73d-650e64f015b5 ┃ ┠────────────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ SESSION DURATION β”‚ N/A ┃ ┠────────────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ EXPIRES IN β”‚ N/A ┃ ┠────────────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ EXPIRES_SKEW_TOLERANCE β”‚ N/A ┃
how-to
https://docs.zenml.io/how-to/auth-management/gcp-service-connector
392
s': '5', 'optimizer': 'Adam', 'lr': '0.001'} ┃┗━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ For more details on a specific model version, you can use the zenml model-registry models get-version REGISTERED_MODEL_NAME -v VERSION command: $ zenml model-registry models get-version tensorflow-mnist-model -v 1 ┏━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ MODEL VERSION PROPERTY β”‚ VALUE ┃ ┠────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨ ┃ REGISTERED_MODEL_NAME β”‚ tensorflow-mnist-model ┃ ┠────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨
stack-components
https://docs.zenml.io/v/docs/stack-components/model-registries/mlflow
462
BentoML Deploying your models locally with BentoML. BentoML is an open-source framework for machine learning model serving. It can be used to deploy models locally, in a cloud environment, or in a Kubernetes environment. The BentoML Model Deployer is one of the available flavors of the Model Deployer stack component. Provided with the BentoML integration, it can be used to deploy and manage BentoML models or Bentos on a locally running HTTP server. The BentoML Model Deployer can be used to deploy models for local development and production use cases. While the integration mainly works in a local environment where pipelines are run, the resulting Bento can be exported, containerized, and deployed in a remote environment. Within the BentoML ecosystem, Yatai and bentoctl are the tools responsible for deploying Bentos to Kubernetes clusters and cloud platforms. Full support for these advanced tools is in progress and will be available soon. When to use it? You should use the BentoML Model Deployer to: Standardize the way you deploy your models to production within your organization. If you are looking to deploy your models in a simple way, while you are still able to transform your model into a production-ready solution when that time comes. If you are looking to deploy your models with other Kubernetes-based solutions, you can take a look at one of the other Model Deployer Flavors available in ZenML. BentoML also allows you to deploy your models in a more complex production-grade setting. Bentoctl is one of the tools that can help you get there. Bentoctl takes your built Bento from a ZenML pipeline and deploys it with bentoctl into a cloud environment such as AWS Lambda, AWS SageMaker, Google Cloud Functions, Google Cloud AI Platform, or Azure Functions. Read more about this in the From Local to Cloud with bentoctl section.
stack-components
https://docs.zenml.io/v/docs/stack-components/model-deployers/bentoml
390
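Deployment follows the usual Model Deployer pattern; a sketch (the component name is illustrative):

# Install the integration and register the model deployer
zenml integration install bentoml -y
zenml model-deployer register bentoml_deployer --flavor=bentoml

# Add it to your active stack
zenml stack update -d bentoml_deployer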
our pipeline runs, such as the logs of your steps.To access the Sagemaker Pipelines UI, you will have to launch Sagemaker Studio via the AWS Sagemaker UI. Make sure that you are launching it from within your desired AWS region. Once the Studio UI has launched, click on the 'Pipeline' button on the left side. From there you can view the pipelines that have been launched via ZenML: Debugging SageMaker Pipelines If your SageMaker pipeline encounters an error before the first ZenML step starts, the ZenML run will not appear in the ZenML dashboard. In such cases, use the SageMaker UI to review the error message and logs. Here's how: Open the corresponding pipeline in the SageMaker UI as shown in the SageMaker UI Section, Open the execution, Click on the failed step in the pipeline graph, Go to the 'Output' tab to see the error message or to 'Logs' to see the logs. Alternatively, for a more detailed view of log messages during SageMaker pipeline executions, consider using Amazon CloudWatch: Search for 'CloudWatch' in the AWS console search bar. Navigate to 'Logs > Log groups.' Open the '/aws/sagemaker/ProcessingJobs' log group. Here, you can find log streams for each step of your SageMaker pipeline executions. Run pipelines on a schedule The ZenML Sagemaker orchestrator doesn't currently support running pipelines on a schedule. We maintain a public roadmap for ZenML, which you can find here. We welcome community contributions (see more here) so if you want to enable scheduling for Sagemaker, please do let us know! Configuration at pipeline or step level
stack-components
https://docs.zenml.io/stack-components/orchestrators/sagemaker
343
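If you prefer the terminal over the console, the same CloudWatch logs can be inspected with the AWS CLI v2; a sketch:

# List the log streams for SageMaker processing jobs
aws logs describe-log-streams --log-group-name /aws/sagemaker/ProcessingJobs

# Tail the log group and follow new events
aws logs tail /aws/sagemaker/ProcessingJobs --follow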
gration page. Some of the other examples used are (question β†’ URL ending): How do I get going with the Label Studio integration? What are the first steps? β†’ stacks-and-components/component-guide/annotators/label-studio; How can I write my own custom materializer? β†’ user-guide/advanced-guide/data-management/handle-custom-data-types; How do I generate embeddings as part of a RAG pipeline when using ZenML? β†’ user-guide/llmops-guide/rag-with-zenml/embeddings-generation; How do I use failure hooks in my ZenML pipeline? β†’ user-guide/advanced-guide/pipelining-features/use-failure-success-hooks; Can I deploy ZenML self-hosted with Helm? How do I do it? β†’ deploying-zenml/zenml-self-hosted/deploy-with-helm. For the retrieval pipeline, all we have to do is encode the query as a vector and then query the PostgreSQL database for the most similar vectors. We then check whether the URL of the document we expected to show up is actually present in the top n results. def query_similar_docs(question: str, url_ending: str) -> tuple: embedded_question = get_embeddings(question) db_conn = get_db_conn() top_similar_docs_urls = get_topn_similar_docs( embedded_question, db_conn, n=5, only_urls=True ) urls = [url[0] for url in top_similar_docs_urls] # Unpacking URLs from tuples return (question, url_ending, urls) def test_retrieved_docs_retrieve_best_url(question_doc_pairs: list) -> float: total_tests = len(question_doc_pairs) failures = 0 for pair in question_doc_pairs: question, url_ending, urls = query_similar_docs( pair["question"], pair["url_ending"] ) if all(url_ending not in url for url in urls): logging.error( f"Failed for question: {question}. Expected URL ending: {url_ending}. Got: {urls}" ) failures += 1 logging.info(f"Total tests: {total_tests}. Failures: {failures}") failure_rate = (failures / total_tests) * 100 return round(failure_rate, 2) We include some logging so that when running the pipeline locally we can get some immediate feedback logged to the console.
user-guide
https://docs.zenml.io/v/docs/user-guide/llmops-guide/evaluation/retrieval
475
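Putting it together, a minimal invocation using one of the question/URL pairs listed above might look like this:

question_doc_pairs = [
    {
        "question": "How do I generate embeddings as part of a RAG pipeline when using ZenML?",
        "url_ending": "user-guide/llmops-guide/rag-with-zenml/embeddings-generation",
    },
]

failure_rate = test_retrieved_docs_retrieve_best_url(question_doc_pairs)
print(f"Retrieval failure rate: {failure_rate}%")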
Great Expectations How to use Great Expectations to run data quality checks in your pipelines and document the results The Great Expectations Data Validator flavor provided with the ZenML integration uses Great Expectations to run data profiling and data quality tests on the data circulated through your pipelines. The test results can be used to implement automated corrective actions in your pipelines. They are also automatically rendered into documentation for further visual interpretation and evaluation. When would you want to use it? Great Expectations is an open-source library that helps keep the quality of your data in check through data testing, documentation, and profiling, and helps improve communication and observability. Great Expectations works with tabular data in a variety of formats and data sources, of which ZenML currently supports only pandas.DataFrame as part of its pipelines. You should use the Great Expectations Data Validator when you need the following data validation features that are possible with Great Expectations: Data Profiling: generates a set of validation rules (Expectations) automatically by inferring them from the properties of an input dataset. Data Quality: runs a set of predefined or inferred validation rules (Expectations) against an in-memory dataset. Data Docs: generate and maintain human-readable documentation of all your data validation rules, data quality checks and their results. You should consider one of the other Data Validator flavors if you need a different set of data validation features. How do you deploy it? The Great Expectations Data Validator flavor is included in the Great Expectations ZenML integration; you need to install it on your local machine to be able to register a Great Expectations Data Validator and add it to your stack: zenml integration install great_expectations -y
stack-components
https://docs.zenml.io/stack-components/data-validators/great-expectations
338
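After installing the integration, registration follows the standard Data Validator pattern (the same commands reappear later in this guide):

# Register the Great Expectations data validator
zenml data-validator register ge_data_validator --flavor=great_expectations

# Register and set a stack with the new data validator
zenml stack register custom_stack -dv ge_data_validator ... --set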
ect URL' (see above). Extra configuration options By default, the ZenML application will be configured to use an SQLite non-persistent database. If you want to use a persistent database, you can configure this by amending the Dockerfile in your Space's root directory (a sketch follows below). For full details on the various parameters you can change, see our reference documentation on configuring ZenML when deployed with Docker. If you are using the space just for testing and experimentation, you don't need to make any changes to the configuration. Everything will work out of the box. You can also use an external secrets backend together with your HuggingFace Spaces as described in our documentation. You should be sure to use HuggingFace's inbuilt 'Repository secrets' functionality to configure any secrets you need to use in your Dockerfile configuration. See the documentation for more details on how to set this up. If you wish to use a cloud secrets backend together with ZenML for secrets management, you must update your password on your ZenML Server on the Dashboard. This is because the default user created by the HuggingFace Spaces deployment process has no password assigned to it, and since the Space is publicly accessible, potentially anyone could access your secrets without this extra step. To change your password navigate to the Settings page by clicking the button in the upper right-hand corner of the Dashboard and then click 'Update Password'. Troubleshooting If you are having trouble with your ZenML server on HuggingFace Spaces, you can view the logs by clicking on the "Open Logs" button at the top of the space. This will give you more context of what's happening with your server. If you have any other issues, please feel free to reach out to us on our Slack channel for more support. Upgrading your ZenML Server on HF Spaces
getting-started
https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-using-huggingface-spaces
370
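A sketch of such a Dockerfile amendment, assuming a hypothetical external MySQL database (host, credentials, and database name are placeholders; ZENML_STORE_URL is the server's store configuration variable):

FROM zenmldocker/zenml-server:latest

# Point the server at an external, persistent MySQL database (placeholder values).
ENV ZENML_STORE_URL=mysql://admin:password@your-mysql-host:3306/zenml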
πŸ”Interact with secrets Managing your secrets with ZenML. How to create a secret To create a secret with a name <SECRET_NAME> and a key-value pair, you can run the following CLI command: zenml secret create <SECRET_NAME> \ --<KEY_1>=<VALUE_1> \ --<KEY_2>=<VALUE_2> # Another option is to use the '--values' option and provide key-value pairs in either JSON or YAML format. zenml secret create <SECRET_NAME> \ --values='{"key1":"value2","key2":"value2"}' Alternatively, you can create the secret in an interactive session (in which ZenML will query you for the secret keys and values) by passing the --interactive/-i parameter: zenml secret create <SECRET_NAME> -i For secret values that are too big to pass as a command line argument, or have special characters, you can also use the special @ syntax to indicate to ZenML that the value needs to be read from a file: zenml secret create <SECRET_NAME> \ --key=@path/to/file.txt \ ... # Alternatively, you can utilize the '--values' option by specifying a file path containing key-value pairs in either JSON or YAML format. zenml secret create <SECRET_NAME> \ --values=@path/to/file.txt The CLI also includes commands that can be used to list, update and delete secrets. A full guide on using the CLI to create, access, update and delete secrets is available here. Interactively register missing secrets for your stack If you're using components with secret references in your stack, you need to make sure that all the referenced secrets exist. To make this process easier, you can use the following CLI command to interactively register all secrets for a stack: zenml stack register-secrets [<STACK_NAME>] The ZenML client API offers a programmatic interface to create, e.g.: from zenml.client import Client client = Client() client.create_secret( name="my_secret", values={ "username": "admin", "password": "abc123"
how-to
https://docs.zenml.io/how-to/interact-with-secrets
436
Spark Executing individual steps on Spark The spark integration brings two different step operators: the SparkStepOperator, which serves as the base class for all Spark-related step operators, and the KubernetesSparkStepOperator, which launches ZenML steps as Spark applications with Kubernetes as a cluster manager. SparkStepOperator The implementation can be summarized in two parts. First, the configuration: from typing import Optional, Dict, Any from zenml.step_operators import BaseStepOperatorConfig class SparkStepOperatorConfig(BaseStepOperatorConfig): """Spark step operator config. Attributes: master: is the master URL for the cluster. You might see different schemes for different cluster managers which are supported by Spark like Mesos, YARN, or Kubernetes. Within the context of this PR, the implementation supports Kubernetes as a cluster manager. deploy_mode: can either be 'cluster' (default) or 'client' and it decides where the driver node of the application will run. submit_kwargs: is the JSON string of a dict, which will be used to define additional params if required (Spark has quite a lot of different parameters, so including them all in the step operator was not implemented). """ master: str deploy_mode: str = "cluster" submit_kwargs: Optional[Dict[str, Any]] = None and then the implementation: from typing import List from pyspark.conf import SparkConf from zenml.step_operators import BaseStepOperator class SparkStepOperator(BaseStepOperator): """Base class for all Spark-related step operators.""" def _resource_configuration( self, spark_config: SparkConf, resource_configuration: "ResourceSettings", ) -> None: """Configures Spark to handle the resource configuration.""" def _backend_configuration( self, spark_config: SparkConf, step_config: "StepConfiguration", ) -> None:
stack-components
https://docs.zenml.io/stack-components/step-operators/spark-kubernetes
388
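For reference, registering the Kubernetes-backed variant from the CLI might look like the following sketch (the flavor name and parameter flags are assumed to follow the usual ZenML CLI pattern; check zenml step-operator flavor list for the exact names):

zenml step-operator register spark_on_k8s \
    --flavor=spark-kubernetes \
    --master=k8s://https://<API_SERVER_HOST>:443 \
    --namespace=<NAMESPACE> \
    --service_account=<SERVICE_ACCOUNT>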
ate this documentation as we develop this feature. Getting features from a registered and active feature store is possible by creating your own step that interfaces into the feature store: from datetime import datetime from typing import Any, Dict, List, Union import pandas as pd from zenml import pipeline, step from zenml.client import Client from zenml.exceptions import DoesNotExistException @step def get_historical_features( entity_dict: Union[Dict[str, Any], str], features: List[str], full_feature_names: bool = False ) -> pd.DataFrame: """Feast Feature Store historical data step Returns: The historical features as a DataFrame. """ feature_store = Client().active_stack.feature_store if not feature_store: raise DoesNotExistException( "The Feast feature store component is not available. " "Please make sure that the Feast stack component is registered as part of your current active stack." ) entity_dict["event_timestamp"] = [ datetime.fromisoformat(val) for val in entity_dict["event_timestamp"] ] entity_df = pd.DataFrame.from_dict(entity_dict) return feature_store.get_historical_features( entity_df=entity_df, features=features, full_feature_names=full_feature_names, ) entity_dict = { "driver_id": [1001, 1002, 1003], "label_driver_reported_satisfaction": [1, 5, 3], "event_timestamp": [ datetime(2021, 4, 12, 10, 59, 42).isoformat(), datetime(2021, 4, 12, 8, 12, 10).isoformat(), datetime(2021, 4, 12, 16, 40, 26).isoformat(), ], "val_to_add": [1, 2, 3], "val_to_add_2": [10, 20, 30], } features = [ "driver_hourly_stats:conv_rate", "driver_hourly_stats:acc_rate", "driver_hourly_stats:avg_daily_trips", "transformed_conv_rate:conv_rate_plus_val1", "transformed_conv_rate:conv_rate_plus_val2", ] @pipeline def my_pipeline(): my_features = get_historical_features(entity_dict, features) ...
stack-components
https://docs.zenml.io/stack-components/feature-stores/feast
450
ister custom_stack -dv ge_data_validator ... --setYou can continue to edit your local Great Expectations configuration (e.g. add new Data Sources, update the Metadata Stores etc.) and these changes will be visible in your ZenML pipelines. You can also use the Great Expectations CLI as usual to manage your configuration and your Expectations. This deployment method migrates your existing Great Expectations configuration to ZenML and allows you to use it with local as well as remote orchestrators. You have to load the Great Expectations configuration contents in one of the Data Validator configuration parameters using the @ operator, e.g.: # Register the Great Expectations data validator zenml data-validator register ge_data_validator --flavor=great_expectations \ --context_config=@/path/to/my/great_expectations/great_expectations.yaml # Register and set a stack with the new data validator zenml stack register custom_stack -dv ge_data_validator ... --set When you are migrating your existing Great Expectations configuration to ZenML, keep in mind that the Metadata Stores that you configured there will also need to be accessible from the location where pipelines are running. For example, you cannot use a non-local orchestrator with a Great Expectations Metadata Store that is located on your filesystem. Advanced Configuration The Great Expectations Data Validator has a few advanced configuration attributes that might be useful for your particular use-case:
stack-components
https://docs.zenml.io/stack-components/data-validators/great-expectations
284
spend most of your time defining these two things. Even though pipelines are simple Python functions, you are only allowed to call steps within this function. The inputs for steps called within a pipeline can either be the outputs of previous steps or, alternatively, you can pass in values directly (as long as they're JSON-serializable). @pipeline def my_pipeline(): output_step_one = step_1() step_2(input_one="hello", input_two=output_step_one) Executing the Pipeline is as easy as calling the function that you decorated with the @pipeline decorator (a self-contained example follows below). if __name__ == "__main__": my_pipeline() Artifacts Artifacts represent the data that goes through your steps as inputs and outputs, and they are automatically tracked and stored by ZenML in the artifact store. They are produced by and circulated among steps whenever your step returns an object or a value. This means the data is not passed between steps in memory. Rather, when the execution of a step is completed, its outputs are written to storage, and when the next step gets executed, they are loaded from storage. The serialization and deserialization logic of artifacts is defined by Materializers. Models Models are used to represent the outputs of a training process along with all metadata associated with that output. In other words: models in ZenML are more broadly defined as the weights as well as any associated information. Models are first-class citizens in ZenML and as such viewing and using them is unified and centralized in the ZenML API, the client, as well as on the ZenML Pro dashboard. Materializers Materializers define how artifacts live in between steps. More precisely, they define how data of a particular type can be serialized/deserialized, so that the steps are able to load the input data and store the output data.
getting-started
https://docs.zenml.io/getting-started/core-concepts
359
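A self-contained version of the snippet above, with the two steps defined:

from zenml import pipeline, step

@step
def step_1() -> str:
    return "world"

@step
def step_2(input_one: str, input_two: str) -> None:
    print(f"{input_one} {input_two}")

@pipeline
def my_pipeline():
    output_step_one = step_1()
    step_2(input_one="hello", input_two=output_step_one)

if __name__ == "__main__":
    my_pipeline()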
to load the input data and store the output data.All materializers use the base abstraction called the BaseMaterializer class. While ZenML comes built-in with various implementations of materializers for different datatypes, if you are using a library or a tool that doesn't work with our built-in options, you can write your own custom materializer to ensure that your data can be passed from step to step. Parameters & Settings When we think about steps as functions, we know they receive input in the form of artifacts. We also know that they produce output (in the form of artifacts, stored in the artifact store). But steps also take parameters. The parameters that you pass into the steps are also (helpfully!) stored by ZenML. This helps freeze the iterations of your experimentation workflow in time, so you can return to them exactly as you run them. On top of the parameters that you provide for your steps, you can also use different Settings to configure runtime configurations for your infrastructure and pipelines. Model and model versions ZenML exposes the concept of a Model, which consists of multiple different model versions. A model version represents a unified view of the ML models that are created, tracked, and managed as part of a ZenML project. Model versions link all other entities to a centralized view. 2. Execution Once you have implemented your workflow by using the concepts described above, you can focus your attention on the execution of the pipeline run. Stacks & Components When you want to execute a pipeline run with ZenML, Stacks come into play. A Stack is a collection of stack components, where each component represents the respective configuration regarding a particular function in your MLOps pipeline such as orchestration systems, artifact repositories, and model deployment platforms. For instance, if you take a close look at the default local stack of ZenML, you will see two components that are required in every stack in ZenML, namely an orchestrator and an artifact store.
getting-started
https://docs.zenml.io/v/docs/getting-started/core-concepts
396
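As a sketch of what a custom materializer can look like for a hypothetical MyObj type (method signatures based on the BaseMaterializer interface in recent ZenML versions):

import os
from typing import Type

from zenml.enums import ArtifactType
from zenml.materializers.base_materializer import BaseMaterializer


class MyObj:
    def __init__(self, name: str) -> None:
        self.name = name


class MyMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (MyObj,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[MyObj]) -> MyObj:
        """Read the object back from the artifact store."""
        with self.artifact_store.open(os.path.join(self.uri, "data.txt"), "r") as f:
            return MyObj(name=f.read())

    def save(self, my_obj: MyObj) -> None:
        """Write the object to the artifact store."""
        with self.artifact_store.open(os.path.join(self.uri, "data.txt"), "w") as f:
            f.write(my_obj.name)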
Which files are built into the image ZenML determines the root directory of your source files in the following order: If you've initialized zenml (zenml init), the repository root directory will be used. Otherwise, the parent directory of the Python file you're executing will be the source root. For example, if you run python /path/to/file.py, the source root would be /path/to. You can specify how these files are handled using the source_files attribute on the DockerSettings (an example follows below): The default behavior download_or_include: The files will be downloaded if they're inside a registered code repository and the repository has no local changes, otherwise, they will be included in the image. If you want your files to be included in the image in any case, set the source_files attribute to include. If you want your files to be downloaded in any case, set the source_files attribute to download. If this is specified, the files must be inside a registered code repository and the repository must have no local changes, otherwise the Docker build will fail. If you want to prevent ZenML from copying or downloading any of your source files, you can do so by setting the source_files attribute on the Docker settings to ignore. This is an advanced feature and will most likely cause unintended and unanticipated behavior when running your pipelines. If you use this, make sure to copy all the necessary files to the correct paths yourself. Which files get included When including files in the image, ZenML copies all contents of the root directory into the Docker image. To exclude files and keep the image smaller, use a .dockerignore file in either of the following ways: Have a file called .dockerignore in your source root directory. Explicitly specify a .dockerignore file to use: docker_settings = DockerSettings(dockerignore="/path/to/.dockerignore") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ...
how-to
https://docs.zenml.io/how-to/customize-docker-builds/which-files-are-built-into-the-image
392
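For example, forcing the download behavior described above (this requires the files to live in a registered, clean code repository):

from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(source_files="download")

@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    ...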
ow_secret \ --username=admin \ --password=abc123 # Then reference the username and password in our experiment tracker component zenml experiment-tracker register mlflow \ --flavor=mlflow \ --tracking_username={{mlflow_secret.username}} \ --tracking_password={{mlflow_secret.password}} \ ... When using secret references in your stack, ZenML will validate that all secrets and keys referenced in your stack components exist before running a pipeline. This helps us fail early so your pipeline doesn't fail after running for some time due to a missing secret. This validation by default needs to fetch and read every secret to make sure that both the secret and the specified key-value pair exist. This can take quite some time and might fail if you don't have permission to read secrets. You can use the environment variable ZENML_SECRET_VALIDATION_LEVEL to disable or control the degree to which ZenML validates your secrets (an example follows below): Setting it to NONE disables any validation. Setting it to SECRET_EXISTS only validates the existence of secrets. This might be useful if the machine you're running on only has permission to list secrets but not actually read their values. Setting it to SECRET_AND_KEY_EXISTS (the default) validates both the secret existence as well as the existence of the exact key-value pair. Fetch secret values in a step If you are using centralized secrets management, you can access secrets directly from within your steps through the ZenML Client API. This allows you to use your secrets for querying APIs from within your step without hard-coding your access keys: from zenml import step from zenml.client import Client @step def secret_loader() -> None: """Load the example secret from the server.""" # Fetch the secret from ZenML. secret = Client().get_secret(<SECRET_NAME>) # `secret.secret_values` will contain a dictionary with all key-value # pairs within your secret. authenticate_to_some_api( username=secret.secret_values["username"], password=secret.secret_values["password"], ) ...
how-to
https://docs.zenml.io/v/docs/how-to/interact-with-secrets
406
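For example, on a machine that is only allowed to list secrets but not read their values:

export ZENML_SECRET_VALIDATION_LEVEL=SECRET_EXISTS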
━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛ ``` Register and connect an AWS Container Registry Stack Component to an ECR container registry: zenml container-registry register ecr-us-east-1 --flavor aws --uri=715803424590.dkr.ecr.us-east-1.amazonaws.com Example Command Output ```text Running with active workspace: 'default' (repository) Running with active stack: 'default' (repository) Successfully registered container_registry `ecr-us-east-1`. ``` ```sh zenml container-registry connect ecr-us-east-1 --connector aws-demo-multi ``` Example Command Output ```text Running with active workspace: 'default' (repository) Running with active stack: 'default' (repository) Successfully connected container registry `ecr-us-east-1` to the following resources: ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ CONNECTOR ID β”‚ CONNECTOR NAME β”‚ CONNECTOR TYPE β”‚ RESOURCE TYPE β”‚ RESOURCE NAMES ┃ ┠──────────────────────────────────────┼────────────────┼────────────────┼────────────────────┼──────────────────────────────────────────────┨ ┃ bf073e06-28ce-4a4a-8100-32e7cb99dced β”‚ aws-demo-multi β”‚ πŸ”Ά aws β”‚ 🐳 docker-registry β”‚ 715803424590.dkr.ecr.us-east-1.amazonaws.com ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ ``` Combine all Stack Components together into a Stack and set it as active (also throw in a local Image Builder for completion): zenml image-builder register local --flavor local Example Command Output ```text Running with active workspace: 'default' (global) Running with active stack: 'default' (global) Successfully registered image_builder `local`. ``` ```sh zenml stack register aws-demo -a s3-zenfiles -o eks-zenml-zenhacks -c ecr-us-east-1 -i local --set ``` Example Command Output ```text
how-to
https://docs.zenml.io/how-to/auth-management/aws-service-connector
600
accessed via `self.config`. step_settings = cast( MyOrchestratorSettings, self.get_settings(step) ) # If your orchestrator supports setting resources like CPUs, GPUs or # memory for the pipeline or specific steps, you can find out whether # specific resources were specified for this step: if self.requires_resources_in_orchestration_environment(step): resources = step.config.resource_settings To see a full end-to-end worked example of a custom orchestrator, see here. Enabling CUDA for GPU-backed hardware Note that if you wish to use your custom orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. This requires some extra settings customization and is essential for enabling CUDA so the GPU can deliver its full acceleration.
stack-components
https://docs.zenml.io/v/docs/stack-components/orchestrators/custom
178
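On the user side, such resources are requested through ResourceSettings on the step or pipeline; a sketch:

from zenml import step
from zenml.config import ResourceSettings

@step(settings={"resources": ResourceSettings(cpu_count=8, gpu_count=1, memory="16GB")})
def training_step() -> None:
    ...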
Step output typing and annotation Step outputs are stored in your artifact store. Annotate and name them to make them more explicit. Type annotations Your functions will work as ZenML steps even if you don't provide any type annotations for their inputs and outputs. However, adding type annotations to your step functions gives you lots of additional benefits: Type validation of your step inputs: ZenML makes sure that your step functions receive an object of the correct type from the upstream steps in your pipeline. Better serialization: Without type annotations, ZenML uses Cloudpickle to serialize your step outputs. When provided with type annotations, ZenML can choose a materializer that is best suited for the output. In case none of the built-in materializers work, you can even write a custom materializer. ZenML provides a built-in CloudpickleMaterializer that can handle any object by saving it with cloudpickle. However, this is not production-ready because the resulting artifacts cannot be loaded when running with a different Python version. In such cases, you should consider building a custom Materializer to save your objects in a more robust and efficient format. Moreover, using the CloudpickleMaterializer could allow users to upload any kind of object. This could be exploited to upload a malicious file, which could execute arbitrary code on the vulnerable system. from typing import Tuple from zenml import step @step def square_root(number: int) -> float: return number ** 0.5 # To define a step with multiple outputs, use a `Tuple` type annotation @step def divide(a: int, b: int) -> Tuple[int, int]: return a // b, a % b If you want to make sure you get all the benefits of type annotating your steps, you can set the environment variable ZENML_ENFORCE_TYPE_ANNOTATIONS to True. ZenML will then raise an exception in case one of the steps you're trying to run is missing a type annotation. Tuple vs multiple outputs
how-to
https://docs.zenml.io/v/docs/how-to/build-pipelines/step-output-typing-and-annotation
404
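Beyond plain types, outputs can also be given explicit names with Annotated, which covers the 'name them' part of the title; a sketch:

from typing import Tuple

from typing_extensions import Annotated
from zenml import step

@step
def divide(a: int, b: int) -> Tuple[
    Annotated[int, "quotient"], Annotated[int, "remainder"]
]:
    return a // b, a % b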
allows you to override the default Prodigy config. Finally, add all these components to a stack and set it as your active stack. For example: zenml stack copy default annotation zenml stack update annotation -an prodigy zenml stack set annotation # optionally also zenml stack describe Now if you run a simple CLI command like zenml annotator dataset list this should work without any errors. You're ready to use your annotator in your ML workflow! How do you use it? With Prodigy, there is no need to specially start the annotator ahead of time like with Label Studio. Instead, just use Prodigy as per the Prodigy docs and then use the ZenML wrapper / API to retrieve your labelled data using our Python methods. ZenML supports access to your data and annotations via the zenml annotator ... CLI command. You can access information about the datasets you're using with the zenml annotator dataset list command. To work on annotation for a particular dataset, you can run zenml annotator dataset annotate <DATASET_NAME> <CUSTOM_COMMAND>. This is the equivalent of running prodigy <CUSTOM_COMMAND> in the terminal. For example, you might run: zenml annotator dataset annotate your_dataset --command="textcat.manual news_topics ./news_headlines.jsonl --label Technology,Politics,Economy,Entertainment" A common workflow for Prodigy is to annotate data as you would usually do, and then use the connection into ZenML to import those annotations within a step in your pipeline (if running locally). For example, within a ZenML step: from typing import List, Dict, Any from zenml import step from zenml.client import Client @step def import_annotations() -> List[Dict[str, Any]]: zenml_client = Client() annotations = zenml_client.active_stack.annotator.get_labeled_data(dataset_name="my_dataset") # Do something with the annotations return annotations
stack-components
https://docs.zenml.io/stack-components/annotators/prodigy
404
thon file_that_runs_a_zenml_pipeline.py Tekton UI Tekton comes with its own UI that you can use to find further details about your pipeline runs, such as the logs of your steps. To find the Tekton UI endpoint, we can use the following command: kubectl get ingress -n tekton-pipelines -o jsonpath='{.items[0].spec.rules[0].host}' Additional configuration For additional configuration of the Tekton orchestrator, you can pass TektonOrchestratorSettings which allows you to configure node selectors, affinity, and tolerations to apply to the Kubernetes Pods running your pipeline. These can be specified either using the Kubernetes model objects or as dictionaries. from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import TektonOrchestratorSettings from kubernetes.client.models import V1Toleration tekton_settings = TektonOrchestratorSettings( pod_settings={ "affinity": { "nodeAffinity": { "requiredDuringSchedulingIgnoredDuringExecution": { "nodeSelectorTerms": [ { "matchExpressions": [ { "key": "node.kubernetes.io/name", "operator": "In", "values": ["my_powerful_node_group"], } ] } ] } } }, "tolerations": [ V1Toleration( key="node.kubernetes.io/name", operator="Equal", value="", effect="NoSchedule", ) ], } ) If your pipeline steps have certain hardware requirements, you can specify them as ResourceSettings: from zenml.config import ResourceSettings resource_settings = ResourceSettings(cpu_count=8, memory="16GB") These settings can then be specified on either pipeline-level or step-level: # Either specify on pipeline-level @pipeline( settings={ "orchestrator.tekton": tekton_settings, "resources": resource_settings, } ) def my_pipeline(): ... # OR specify settings on step-level @step( settings={ "orchestrator.tekton": tekton_settings, "resources": resource_settings, } ) def my_step(): ... Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings. For more information and a full list of configurable attributes of the Tekton orchestrator, check out the SDK Docs.
stack-components
https://docs.zenml.io/v/docs/stack-components/orchestrators/tekton
462
nRegExpMetric", columns=["Review_Text", "Title"], reg_exp=r"[A-Z][A-Za-z0-9 ]*", ), ], column_mapping=ColumnMapping( target="Rating", numerical_features=["Age", "Positive_Feedback_Count"], categorical_features=[ "Division_Name", "Department_Name", "Class_Name", ], text_features=["Review_Text", "Title"], ), download_nltk_data=True, ) # post-processing (e.g. interpret results, take actions) can happen here return report.json(), HTMLString(report.show(mode="inline").data) @step def data_validation( reference_dataset: pd.DataFrame, comparison_dataset: pd.DataFrame, ) -> Tuple[ Annotated[str, "test_json"], Annotated[HTMLString, "test_html"] ]: """Custom data validation step with Evidently. Args: reference_dataset: a Pandas DataFrame comparison_dataset: a Pandas DataFrame of new data you wish to compare against the reference data Returns: The Evidently test suite results rendered in JSON and HTML formats. """ # pre-processing (e.g. dataset preparation) can take place here data_validator = EvidentlyDataValidator.get_active_data_validator() test_suite = data_validator.data_validation( dataset=reference_dataset, comparison_dataset=comparison_dataset, check_list=[ EvidentlyTestConfig.test("DataQualityTestPreset"), EvidentlyTestConfig.test_generator( "TestColumnRegExp", columns=["Review_Text", "Title"], reg_exp=r"[A-Z][A-Za-z0-9 ]*", ), ], column_mapping=ColumnMapping( target="Rating", numerical_features=["Age", "Positive_Feedback_Count"], categorical_features=[ "Division_Name", "Department_Name", "Class_Name", ], text_features=["Review_Text", "Title"], ), download_nltk_data=True, ) # post-processing (e.g. interpret results, take actions) can happen here return test_suite.json(), HTMLString(test_suite.show(mode="inline").data) Have a look at the complete list of methods and parameters available in the EvidentlyDataValidator API in the SDK docs. Call Evidently directly You can use the Evidently library directly in your custom pipeline steps, e.g.:
stack-components
https://docs.zenml.io/stack-components/data-validators/evidently
468
Associate a pipeline with a Model The most common use-case for a Model is to associate it with a pipeline. from zenml import pipeline from zenml.model.model import Model @pipeline( model=Model( name="ClassificationModel", # Give your models unique names tags=["MVP", "Tabular"], # Use tags for future filtering ) ) def my_pipeline(): ... This will associate this pipeline with the model specified. In case the model already exists, this will create a new version of that model. In case you want to attach the pipeline to an existing model version, specify this as well. from zenml import pipeline from zenml.model.model import Model from zenml.enums import ModelStages @pipeline( model=Model( name="ClassificationModel", # Give your models unique names tags=["MVP", "Tabular"], # Use tags for future filtering version=ModelStages.LATEST, # Alternatively use a stage: STAGING or PRODUCTION ) ) def my_pipeline(): ... Feel free to also move the Model configuration into your configuration files: ... model: name: text_classifier description: A breast cancer classifier tags: ["classifier","sgd"] ...
how-to
https://docs.zenml.io/how-to/use-the-model-control-plane/associate-a-pipeline-with-a-model
266
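Once a pipeline is associated with a Model, steps can access it at runtime through the step context; a sketch (assumes a recent ZenML version where the model is exposed on the context):

from zenml import get_step_context, step

@step
def log_model_info() -> None:
    model = get_step_context().model
    print(model.name, model.version)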
─────────────────────────────────────────────────┨┃ OWNER β”‚ default ┃ ┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨ ┃ WORKSPACE β”‚ default ┃ ┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨ ┃ SHARED β”‚ βž– ┃ ┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨ ┃ CREATED_AT β”‚ 2023-06-20 19:16:26.802374 ┃ ┠──────────────────┼────────────────────────────────────────────────────────────────────────────────┨ ┃ UPDATED_AT β”‚ 2023-06-20 19:16:26.802378 ┃ ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ Configuration ┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ PROPERTY β”‚ VALUE ┃ ┠───────────────┼──────────────────────────────────────┨ ┃ tenant_id β”‚ a79ff333-8f45-4a74-a42e-68871c17b7fb ┃ ┠───────────────┼──────────────────────────────────────┨ ┃ client_id β”‚ 8926254a-8c3f-430a-a2fd-bdab234d491e ┃ ┠───────────────┼──────────────────────────────────────┨ ┃ client_secret β”‚ [HIDDEN] ┃ ┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ Azure Access Token Uses temporary Azure access tokens explicitly configured by the user or auto-configured from a local environment.
how-to
https://docs.zenml.io/how-to/auth-management/azure-service-connector
461
rent types or resources with the same credentials:In working with Service Connectors, the first step is usually finding out what types of resources you can connect ZenML to. Maybe you have already planned out the infrastructure options for your MLOps platform and are looking to find out whether ZenML can accommodate them. Or perhaps you want to use a particular Stack Component flavor in your Stack and are wondering whether you can use a Service Connector to connect it to external resources. Listing the available Service Connector Types will give you a good idea of what you can do with Service Connectors: zenml service-connector list-types Example Command Output ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━━┓ ┃ NAME β”‚ TYPE β”‚ RESOURCE TYPES β”‚ AUTH METHODS β”‚ LOCAL β”‚ REMOTE ┃ ┠──────────────────────────────┼───────────────┼───────────────────────┼──────────────────┼───────┼────────┨ ┃ Kubernetes Service Connector β”‚ πŸŒ€ kubernetes β”‚ πŸŒ€ kubernetes-cluster β”‚ password β”‚ βœ… β”‚ βœ… ┃ ┃ β”‚ β”‚ β”‚ token β”‚ β”‚ ┃ ┠──────────────────────────────┼───────────────┼───────────────────────┼──────────────────┼───────┼────────┨ ┃ Docker Service Connector β”‚ 🐳 docker β”‚ 🐳 docker-registry β”‚ password β”‚ βœ… β”‚ βœ… ┃ ┠──────────────────────────────┼───────────────┼───────────────────────┼──────────────────┼───────┼────────┨ ┃ AWS Service Connector β”‚ πŸ”Ά aws β”‚ πŸ”Ά aws-generic β”‚ implicit β”‚ βœ… β”‚ βœ… ┃ ┃ β”‚ β”‚ πŸ“¦ s3-bucket β”‚ secret-key β”‚ β”‚ ┃ ┃ β”‚ β”‚ πŸŒ€ kubernetes-cluster β”‚ sts-token β”‚ β”‚ ┃ ┃ β”‚ β”‚ 🐳 docker-registry β”‚ iam-role β”‚ β”‚ ┃
how-to
https://docs.zenml.io/v/docs/how-to/auth-management
510
support. Upgrading your ZenML Server on HF Spaces The default space will use the latest version of ZenML automatically. If you want to update your version, you can simply select the 'Factory reboot' option within the 'Settings' tab of the space. Note that this will wipe any data contained within the space, so if you are not using a MySQL persistent database (as described above) you will lose any data stored in your ZenML deployment on the space. You can also configure the space to use an earlier version by updating the FROM statement at the very top of the Dockerfile (an example follows below).
getting-started
https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-using-huggingface-spaces
136
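For example, pinning the Space to a specific release is a one-line change at the top of the Dockerfile (the version tag below is a hypothetical example):

FROM zenmldocker/zenml-server:0.56.3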
e - sets the secure header to the specified value. The following secure headers environment variables are supported: ZENML_SERVER_SECURE_HEADERS_SERVER: The Server HTTP header value used to identify the server. The default value is the ZenML server ID. ZENML_SERVER_SECURE_HEADERS_HSTS: The Strict-Transport-Security HTTP header value. The default value is max-age=63072000; includeSubDomains. ZENML_SERVER_SECURE_HEADERS_XFO: The X-Frame-Options HTTP header value. The default value is SAMEORIGIN. ZENML_SERVER_SECURE_HEADERS_XXP: The X-XSS-Protection HTTP header value. The default value is 0. NOTE: this header is deprecated and should not be customized anymore. The Content-Security-Policy header should be used instead. ZENML_SERVER_SECURE_HEADERS_CONTENT: The X-Content-Type-Options HTTP header value. The default value is nosniff. ZENML_SERVER_SECURE_HEADERS_CSP: The Content-Security-Policy HTTP header value. This is by default set to a strict CSP policy that only allows content from the origins required by the ZenML dashboard. NOTE: customizing this header is discouraged, as it may cause the ZenML dashboard to malfunction. ZENML_SERVER_SECURE_HEADERS_REFERRER: The Referrer-Policy HTTP header value. The default value is no-referrer-when-downgrade. ZENML_SERVER_SECURE_HEADERS_CACHE: The Cache-Control HTTP header value. The default value is no-store, no-cache, must-revalidate. ZENML_SERVER_SECURE_HEADERS_PERMISSIONS: The Permissions-Policy HTTP header value. The default value is accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=(). If you prefer to activate the server automatically during the initial deployment and also automate the creation of the initial admin user account, this legacy behavior can be brought back by setting the following environment variables: ZENML_SERVER_AUTO_ACTIVATE: Set this to 1 to automatically activate the server and create the initial admin user account when the server is first deployed. Defaults to 0.
getting-started
https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-docker
435
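As a sketch, overriding individual headers when running the server with Docker might look like this:

docker run -it -d -p 8080:8080 --name zenml \
    -e ZENML_SERVER_SECURE_HEADERS_XFO=DENY \
    -e ZENML_SERVER_SECURE_HEADERS_REFERRER=no-referrer \
    zenmldocker/zenml-server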
ocally first. Deploy ZenML with your custom image Next, adjust your preferred deployment strategy to use the custom Docker image you just built. Deploy a custom ZenML image via CLI You can deploy your custom image via the zenml deploy CLI command by setting the --config argument to a custom configuration file that has both zenmlserver_image_repo and zenmlserver_image_tag set: Define a custom config.yaml based on the base deployment configuration file and set zenmlserver_image_repo and zenmlserver_image_tag according to the custom image you built: zenmlserver_image_repo: <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME> zenmlserver_image_tag: <IMAGE_TAG> Run zenml deploy with the custom config file: zenml deploy --config=/PATH/TO/FILE See the general ZenML CLI Deployment Guide for more information on how to use the zenml deploy CLI command and what other options can be configured. Deploy a custom ZenML image via Docker To deploy your custom image via Docker, first familiarize yourself with the general ZenML Docker Deployment Guide. To use your own image, follow the general guide step by step but replace all mentions of zenmldocker/zenml-server with your custom image reference <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG>. E.g.: To run the ZenML server with Docker based on your custom image, do docker run -it -d -p 8080:8080 --name zenml <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG> To use docker-compose, adjust your docker-compose.yml: services: zenml: image: <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG> Deploy a custom ZenML image via Helm To deploy your custom image via Helm, first familiarize yourself with the general ZenML Helm Deployment Guide. To use your own image, the only thing you need to do differently is to modify the image section of your values.yaml file: zenml: image: repository: <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME> tag: <IMAGE_TAG>
getting-started
https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-custom-image
436
strator β”‚ eks_seldon β”‚ aws_secret_manager ┃┠────────┼──────────────────────┼──────────────────────────────────────┼────────┼─────────┼────────────────────┼──────────────────────┼───────────────────────┼────────────────┼────────────────────┨ ┃ πŸ‘‰ β”‚ default β”‚ fe913bb5-e631-4d4e-8c1b-936518190ebb β”‚ β”‚ default β”‚ β”‚ default β”‚ default β”‚ β”‚ ┃ ┗━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┛ Example of migrating a profile into the default project using a name prefix: $ zenml profile migrate /home/stefan/.config/zenml/profiles/zenbytes --prefix zenbytes_ No component flavors to migrate from /home/stefan/.config/zenml/profiles/zenbytes/stacks.yaml... Migrating stack components from /home/stefan/.config/zenml/profiles/zenbytes/stacks.yaml... Created artifact_store 'zenbytes_s3_store' with flavor 's3'. Created container_registry 'zenbytes_ecr_registry' with flavor 'default'. Created experiment_tracker 'zenbytes_mlflow_tracker' with flavor 'mlflow'. Created experiment_tracker 'zenbytes_mlflow_tracker_local' with flavor 'mlflow'. Created model_deployer 'zenbytes_eks_seldon' with flavor 'seldon'. Created model_deployer 'zenbytes_mlflow' with flavor 'mlflow'. Created orchestrator 'zenbytes_eks_orchestrator' with flavor 'kubeflow'. Created secrets_manager 'zenbytes_aws_secret_manager' with flavor 'aws'. Migrating stacks from /home/stefan/.config/zenml/profiles/zenbytes/stacks.yaml... Created stack 'zenbytes_aws_kubeflow_stack'. Created stack 'zenbytes_local_with_mlflow'. $ zenml stack list Using the default local database. Running with active project: 'default' (global)
reference
https://docs.zenml.io/reference/migration-guide/migration-zero-twenty
532
Connecting remote storage Transitioning to remote artifact storage. In the previous chapters, we've been working with artifacts stored locally on our machines. This setup is fine for individual experiments, but as we move towards a collaborative and production-ready environment, we need a solution that is more robust, shareable, and scalable. Enter remote storage! Remote storage allows us to store our artifacts in the cloud, which means they're accessible from anywhere and by anyone with the right permissions. This is essential for team collaboration and for managing the larger datasets and models that come with production workloads. When using a stack with remote storage, nothing changes except the fact that the artifacts get materialized in a central and remote storage location. This diagram explains the flow: Provisioning and registering a remote artifact store Out of the box, ZenML ships with many different supported artifact store flavors. For convenience, here are some brief instructions on how to quickly get up and running on the major cloud providers: You will need to install and set up the AWS CLI on your machine as a prerequisite, as covered in the AWS CLI documentation, before you register the S3 Artifact Store. The Amazon Web Services S3 Artifact Store flavor is provided by the S3 ZenML integration; you need to install it on your local machine to be able to register an S3 Artifact Store and add it to your stack: zenml integration install s3 -y Having trouble with this command? You can use poetry or pip to install the requirements of any ZenML integration directly. In order to obtain the exact requirements of the AWS S3 integration you can use zenml integration requirements s3. The only configuration parameter mandatory for registering an S3 Artifact Store is the root path URI, which needs to point to an S3 bucket and take the form s3://bucket-name. In order to create an S3 bucket, refer to the AWS documentation.
user-guide
https://docs.zenml.io/user-guide/production-guide/remote-storage
383
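To make the registration step concrete, here is a minimal sketch, assuming the s3 integration is installed; the store name (cloud_artifact_store), stack name (cloud_stack), and bucket (your-bucket) are placeholders to replace with your own values:

# Register an S3 artifact store pointing at your bucket
zenml artifact-store register cloud_artifact_store -f s3 --path=s3://your-bucket
# Use it in a new stack alongside the default orchestrator, and activate that stack
zenml stack register cloud_stack -o default -a cloud_artifact_store --set

With this stack active, subsequent pipeline runs materialize their artifacts in the bucket instead of on the local filesystem.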
AWS-SECRET-ACCESS-KEY>" \ # AWS Secret Access Key.
    --rclone_config_s3_session_token="" \ # AWS Session Token.
    --rclone_config_s3_region="" \ # region to connect to.
    --rclone_config_s3_endpoint="" \ # S3 API endpoint.

# Alternatively, to provide key-value pairs, you can use the '--values' option and specify a file path containing
# key-value pairs in either JSON or YAML format.
# File content example: {"rclone_config_s3_type":"s3",...}
zenml secret create s3-seldon-secret \
    --values=@path/to/file.json

Example of configuring a Seldon Core secret for GCS:

zenml secret create gs-seldon-secret \
    --rclone_config_gs_type="google cloud storage" \ # set to 'google cloud storage' for GCS storage.
    --rclone_config_gs_client_secret="" \ # OAuth client secret.
    --rclone_config_gs_token="" \ # OAuth Access Token as a JSON blob.
    --rclone_config_gs_project_number="" \ # project number.
    --rclone_config_gs_service_account_credentials="" \ # service account credentials JSON blob.
    --rclone_config_gs_anonymous=False \ # access public buckets and objects without credentials.
                                         # Set to True if you just want to download files and don't configure credentials.
    --rclone_config_gs_auth_url="" \ # auth server URL.

# Alternatively, to provide key-value pairs, you can use the '--values' option and specify a file path containing
# key-value pairs in either JSON or YAML format.
# File content example: {"rclone_config_gs_type":"google cloud storage",...}
zenml secret create gs-seldon-secret \
    --values=@path/to/file.json

Example of configuring a Seldon Core secret for Azure Blob Storage:

zenml secret create az-seldon-secret \
    --rclone_config_az_type="azureblob" \ # set to 'azureblob' for Azure Blob Storage.
    --rclone_config_az_account="" \ # storage Account Name. Leave blank to use SAS URL or MSI.
    --rclone_config_az_key="" \ # storage Account Key. Leave blank to use SAS URL or MSI.
    --rclone_config_az_sas_url="" \ # SAS URL for container-level access only.
                                    # Leave blank if using account/key or MSI.
stack-components
https://docs.zenml.io/v/docs/stack-components/model-deployers/seldon
478
πŸ‘€API reference

See the ZenML API reference. The ZenML server is a FastAPI application, so the OpenAPI-compliant docs are available at /docs or /redoc of your ZenML server. In the local case (i.e., when using zenml up), the docs are available at http://127.0.0.1:8237/docs. (A quick sketch of fetching the machine-readable spec follows this entry.)
reference
https://docs.zenml.io/reference/api-reference
93
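Because FastAPI applications also expose their schema as JSON, you can usually retrieve the OpenAPI spec directly. A minimal sketch, assuming the default local URL shown above; a deployed server would use its own base URL and may require authentication:

# Fetch the machine-readable OpenAPI schema from a local ZenML server
curl http://127.0.0.1:8237/openapi.json
# The human-readable docs are served at /docs (Swagger UI) and /redoc
curl -I http://127.0.0.1:8237/docs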
Deploy with custom images

Deploying ZenML with custom Docker images. In most cases, deploying ZenML with the default zenmldocker/zenml-server Docker image should work just fine. However, there are some scenarios when you might need to deploy ZenML with a custom Docker image:

You have implemented a custom artifact store for which you want to enable artifact visualizations or step logs in your dashboard.
You have forked the ZenML repository and want to deploy a ZenML server based on your own fork because you made changes to the server / database logic.

Deploying ZenML with custom Docker images is only possible for Docker or Helm deployments.

Build and Push Custom ZenML Server Docker Image

Here is how you can build a custom ZenML server Docker image:

1. Set up a container registry of your choice. E.g., as an individual developer you could create a free Docker Hub account and then set up a free Docker Hub repository.
2. Clone ZenML (or your ZenML fork) and check out the branch that you want to deploy. E.g., if you want to deploy ZenML version 0.41.0, run:
git checkout release/0.41.0
3. Copy the ZenML base.Dockerfile, e.g.:
cp docker/base.Dockerfile docker/custom.Dockerfile
4. Modify the copied Dockerfile:
Add additional dependencies:
RUN pip install <my_package>
(Forks only) install local files instead of official ZenML:
RUN pip install -e .[server,secrets-aws,secrets-gcp,secrets-azure,secrets-hashicorp,s3fs,gcsfs,adlfs,connectors-aws,connectors-gcp,connectors-azure]
5. Build and push an image based on your Dockerfile:
docker build -f docker/custom.Dockerfile . -t <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG> --platform linux/amd64
docker push <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG>

If you want to verify your custom image locally, you can follow the Deploy a custom ZenML image via Docker section below to deploy the ZenML server locally first (a quick smoke-test sketch also follows this entry).

Deploy ZenML with your custom image
getting-started
https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-custom-image
447
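Before wiring the image into a full deployment, a quick local smoke test can catch build problems early. A minimal sketch, assuming the placeholder image name you pushed above, that the server listens on its default container port of 8080, and that a /health endpoint is exposed (all assumptions; adjust to your setup):

# Run the custom server image locally, mapping the assumed default port
docker run -d --name zenml-server-test -p 8080:8080 <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG>
# Check that the server comes up (assumes a /health endpoint)
curl http://localhost:8080/health
# Clean up afterwards
docker rm -f zenml-server-test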
HUB_SHA: ${{ github.event.pull_request.head.sha }}
ZENML_GITHUB_URL_PR: ${{ github.event.pull_request._links.html.href }}

After configuring these values so they apply to your specific situation, the rest of the template should work as-is. Specifically, you will need to install all requirements, connect to your ZenML server, set an active stack, and run a pipeline within your GitHub Action:

steps:
  - name: Check out repository code
    uses: actions/checkout@v3
  - uses: actions/setup-python@v4
    with:
      python-version: '3.9'
  - name: Install requirements
    run: |
      pip3 install -r requirements.txt
  - name: Connect to ZenML server
    run: |
      zenml connect --url $ZENML_HOST --api-key $ZENML_API_KEY
  - name: Set stack
    run: |
      zenml stack set ${{ env.ZENML_STACK }}
  - name: Run pipeline
    run: |
      python run.py \
        --pipeline end-to-end \
        --dataset production \
        --version ${{ env.ZENML_GITHUB_SHA }} \
        --github-pr-url ${{ env.ZENML_GITHUB_URL_PR }}

When you now push to a branch that is part of a Pull Request, this action will run automatically.

(Optional) Comment Metrics onto the PR

Finally, you can configure your GitHub Action workflow to leave a report based on the pipeline that was run. Check out the template for this here: https://github.com/zenml-io/zenml-gitflow/blob/main/.github/workflows/pipeline_run.yaml#L87-L99
user-guide
https://docs.zenml.io/v/docs/user-guide/production-guide/ci-cd
344
MLflow

Logging and visualizing experiments with MLflow. The MLflow Experiment Tracker is an Experiment Tracker flavor provided with the MLflow ZenML integration that uses the MLflow tracking service to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).

When would you want to use it?

MLflow Tracking is a very popular tool that you would normally use in the iterative ML experimentation phase to track and visualize experiment results. That doesn't mean that it cannot be repurposed to track and visualize the results produced by your automated pipeline runs, as you make the transition toward a more production-oriented workflow.

You should use the MLflow Experiment Tracker:

if you have already been using MLflow to track experiment results for your project and would like to continue doing so as you are incorporating MLOps workflows and best practices in your project through ZenML.

if you are looking for a more visually interactive way of navigating the results produced from your ZenML pipeline runs (e.g. models, metrics, datasets).

if you or your team already have a shared MLflow Tracking service deployed somewhere on-premise or in the cloud, and you would like to connect ZenML to it to share the artifacts and metrics logged by your pipelines.

You should consider one of the other Experiment Tracker flavors if you have never worked with MLflow before and would rather use another experiment tracking tool that you are more familiar with.

How do you deploy it?

The MLflow Experiment Tracker flavor is provided by the MLflow ZenML integration; you need to install it on your local machine to be able to register an MLflow Experiment Tracker and add it to your stack:

zenml integration install mlflow -y

(A minimal registration sketch follows this entry.) The MLflow Experiment Tracker can be configured to accommodate the following MLflow deployment scenarios:
stack-components
https://docs.zenml.io/v/docs/stack-components/experiment-trackers/mlflow
358
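Once the integration is installed, registration follows the usual stack-component pattern. A minimal sketch, assuming a local MLflow scenario with no remote tracking server; the tracker and stack names are placeholders:

# Register the experiment tracker and add it to a new stack
zenml experiment-tracker register mlflow_tracker --flavor=mlflow
zenml stack register mlflow_stack -o default -a default -e mlflow_tracker --set

For a shared, remote MLflow Tracking server you would additionally pass connection details (such as the tracking URI and credentials) when registering the tracker.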
[preceding rows truncated]
┠──────────────────┼──────────────────────────────────────────────────────┨
┃ SESSION DURATION β”‚ N/A                                                  ┃
┠──────────────────┼──────────────────────────────────────────────────────┨
┃ EXPIRES IN       β”‚ 11h59m56s                                            ┃
┠──────────────────┼──────────────────────────────────────────────────────┨
┃ OWNER            β”‚ default                                              ┃
┠──────────────────┼──────────────────────────────────────────────────────┨
┃ WORKSPACE        β”‚ default                                              ┃
┠──────────────────┼──────────────────────────────────────────────────────┨
┃ SHARED           β”‚ βž–                                                    ┃
┠──────────────────┼──────────────────────────────────────────────────────┨
┃ CREATED_AT       β”‚ 2023-06-19 19:35:24.090861                           ┃
┠──────────────────┼──────────────────────────────────────────────────────┨
┃ UPDATED_AT       β”‚ 2023-06-19 19:35:24.090863                           ┃
┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

Configuration
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━┓
┃ PROPERTY              β”‚ VALUE     ┃
┠───────────────────────┼───────────┨
┃ region                β”‚ us-east-1 ┃
┠───────────────────────┼───────────┨
┃ aws_access_key_id     β”‚ [HIDDEN]  ┃
┠───────────────────────┼───────────┨
┃ aws_secret_access_key β”‚ [HIDDEN]  ┃
┠───────────────────────┼───────────┨
┃ aws_session_token     β”‚ [HIDDEN]  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━┛

AWS Federation Token

Generates temporary STS tokens for federated users by impersonating another user.
how-to
https://docs.zenml.io/how-to/auth-management/aws-service-connector
476
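Output like the tables above comes from inspecting a registered service connector. Here is a hedged sketch of how such a connector might be registered and inspected; the connector name is a placeholder, and --auto-configure assumes valid AWS credentials are already set up locally via the AWS CLI:

# Register an AWS Service Connector with the federation-token auth method,
# picking up credentials from the local AWS CLI configuration
zenml service-connector register aws-federation-demo --type aws --auth-method federation-token --auto-configure
# Print the connector's properties, including the configuration table shown above
zenml service-connector describe aws-federation-demo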
🐞Debug and solve issues

A guide to debug common issues and get help. If you stumbled upon this page, chances are you're facing issues with using ZenML. This page documents suggestions and best practices to let you debug, get help, and solve issues quickly.

When to get help?

We suggest going through the following checklist before asking for help:

Search on Slack using the built-in Slack search function at the top of the page.
Search on GitHub issues.
Search the docs using the search bar in the top right corner of the page.
Check out the common errors section below.
Understand the problem by studying the additional logs and client/server logs.

Chances are you'd find your answers there. If you can't find any clue, then it's time to post your question on Slack.

How to post on Slack?

When posting on Slack it's useful to provide the following information (when applicable) so that we get a complete picture before jumping into solutions.

1. System Information

Let us know relevant information about your system. We recommend running the following in your terminal and attaching the output to your question.

zenml info -a -s

You can optionally include information about specific packages where you're having problems by using the -p option. For example, if you're having problems with the tensorflow package, you can run:

zenml info -p tensorflow

The output should look something like this:

ZENML_LOCAL_VERSION: 0.40.2
ZENML_SERVER_VERSION: 0.40.2
ZENML_SERVER_DATABASE: mysql
ZENML_SERVER_DEPLOYMENT_TYPE: alpha
ZENML_CONFIG_DIR: /Users/my_username/Library/Application Support/zenml
ZENML_LOCAL_STORE_DIR: /Users/my_username/Library/Application Support/zenml/local_stores
ZENML_SERVER_URL: https://someserver.zenml.io
ZENML_ACTIVE_REPOSITORY_ROOT: /Users/my_username/coding/zenml/repos/zenml
PYTHON_VERSION: 3.9.13
ENVIRONMENT: native
SYSTEM_INFO: {'os': 'mac', 'mac_version': '13.2'}
ACTIVE_WORKSPACE: default
ACTIVE_STACK: default
ACTIVE_USER: some_user
how-to
https://docs.zenml.io/how-to/debug-and-solve-issues
456
Connect in with your User (interactive)

You can authenticate your clients with the ZenML Server using the ZenML CLI and the web-based login. This can be executed with the command:

zenml connect --url https://...

This command will start a series of steps, carried out in your browser, to validate the device from which you are connecting. You can choose whether to mark your respective device as trusted or not. If you choose not to click Trust this device, a 24-hour token will be issued for authentication services. Choosing to trust the device will issue a 30-day token instead.

To see all devices you've permitted, use the following command:

zenml authorized-device list

Additionally, the following command allows you to more precisely inspect one of these devices:

zenml authorized-device describe <DEVICE_ID>

For increased security, you can invalidate a token using the zenml authorized-device lock command followed by the device ID. This helps provide an extra layer of security and control over your devices.

zenml authorized-device lock <DEVICE_ID>

To keep things simple, we can summarize the steps:

Use the zenml connect --url command to start a device flow and connect to a ZenML server.
Choose whether to trust the device when prompted.
Check permitted devices with zenml authorized-device list.
Invalidate a token with zenml authorized-device lock <DEVICE_ID>.

Important notice

Using the ZenML CLI is a secure and comfortable way to interact with your ZenML tenants. It's important to always ensure that only trusted devices are used to maintain security and privacy. Don't forget to manage your device trust levels regularly for optimal security. Should you feel a device's trust needs to be revoked, lock the device immediately. Every token issued is a potential gateway to access your data, secrets, and infrastructure.
how-to
https://docs.zenml.io/v/docs/how-to/connecting-to-zenml/connect-in-with-your-user-interactive
373