{
  "cells": [
    {
      "cell_type": "markdown",
      "id": "0",
      "metadata": {},
      "source": [
        "# ZenML Quickstart\n",
        "\n",
        "Welcome to ZenML! This interactive notebook guides you through the core concepts of building and deploying ML pipelines.\n",
        "\n",
        "We'll run a simple pipeline locally, create snapshots, deploy it as an HTTP service, and explore cloud infrastructure options.\n",
        "\n",
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/zenml-io/zenml/blob/main/examples/quickstart/quickstart.ipynb)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "1",
      "metadata": {},
      "source": [
        "## Step 0: Install ZenML and Setup\n",
        "\n",
        "First, let's install ZenML and, if you're running on Colab, clone the quickstart example files."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "2",
      "metadata": {},
      "outputs": [],
      "source": [
        "!pip install -q \"zenml[server]\"\n",
        "\n",
        "from zenml.environment import Environment\n",
        "\n",
        "if Environment.in_google_colab():\n",
        "    # Install Cloudflare Tunnel binary for public endpoint\n",
        "    !wget -q https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb && dpkg -i cloudflared-linux-amd64.deb\n",
        "\n",
        "    # Clone the quickstart example\n",
        "    !git clone -q -b main https://github.com/zenml-io/zenml\n",
        "    !cp -r zenml/examples/quickstart/* . && rm -rf zenml\n",
        "\n",
        "print(\"✅ ZenML installed successfully!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "3",
      "metadata": {},
      "source": [
        "## Step 1: Create Your First Pipeline\n",
        "\n",
        "Let's build a simple pipeline! A ZenML pipeline is just a function decorated with `@pipeline` that orchestrates steps."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "4",
      "metadata": {},
      "outputs": [],
      "source": [
        "from typing import Annotated\n",
        "\n",
        "from zenml import pipeline, step\n",
        "\n",
        "\n",
        "@step\n",
        "def simple_step(name: str = \"World\") -> Annotated[str, \"greeting\"]:\n",
        "    \"\"\"A simple step that returns a greeting.\"\"\"\n",
        "    message = f\"Hello, {name}! 👋 Welcome to ZenML!\"\n",
        "    return message\n",
        "\n",
        "\n",
        "@pipeline\n",
        "def simple_pipeline(name: str = \"World\") -> Annotated[str, \"greeting\"]:\n",
        "    \"\"\"A simple pipeline with one step.\"\"\"\n",
        "    return simple_step(name=name)\n",
        "\n",
        "\n",
        "print(\"✅ Pipeline defined!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "5",
      "metadata": {},
      "source": [
        "## Step 2: Run Your Pipeline Locally\n",
        "\n",
        "Now let's execute the pipeline. ZenML automatically tracks the execution, artifacts, and metadata."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "6",
      "metadata": {},
      "outputs": [],
      "source": [
        "print(\"🚀 Running pipeline...\")\n",
        "result = simple_pipeline(name=\"ZenML User\")\n",
        "print(\"\\n✅ Pipeline completed!\")\n",
        "print(f\"Run ID: {result.id}\")\n",
        "print(f\"Status: {result.status}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "eyu1xmieqpd",
      "metadata": {},
      "source": [
        "## Step 3: Start ZenML Server and View Dashboard\n",
        "\n",
        "Your pipeline run is now tracked in ZenML! Let's start the ZenML server to see the dashboard with your execution history, artifacts, and metadata."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "21f5ricq14h",
      "metadata": {},
      "outputs": [],
      "source": [
        "from zenml.client import Client\n",
        "from zenml.environment import Environment\n",
        "from zenml.zen_stores.rest_zen_store import RestZenStore\n",
        "\n",
        "client = Client()\n",
        "\n",
        "if not isinstance(client.zen_store, RestZenStore):\n",
        "    # Only spin up a local dashboard if you aren't already connected to a remote server\n",
        "    if Environment.in_google_colab():\n",
        "        # Run ZenML through a Cloudflare tunnel to get a public endpoint\n",
        "        !zenml login --local --port 8237 & cloudflared tunnel --url http://localhost:8237\n",
        "    else:\n",
        "        !zenml login --local\n",
        "        print(\"✅ ZenML server started!\")\n",
        "        print(\"Dashboard URL: http://localhost:8237\")\n",
        "else:\n",
        "    print(f\"✅ Already connected to a ZenML server: {client.zen_store.url}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "esmqff3fcq",
      "metadata": {},
      "source": [
        "## Step 4: Why Import from Modules for Snapshots and Deployments?\n",
        "\n",
        "For snapshots and deployments to work properly, ZenML needs to serialize and load your pipeline code from actual Python modules, not from notebook cells.\n",
        "\n",
        "**In this step**, we'll import the pipeline from the actual `pipelines/simple_pipeline.py` file to prepare for snapshots and deployments. This is a best practice for production workflows."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "r8lyeyuiyrb",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Re-import the pipeline from its Python module so ZenML can serialize\n",
        "# and load its code for snapshots and deployments\n",
        "from pipelines.simple_pipeline import simple_pipeline\n",
        "\n",
        "print(\"✅ Imported pipeline from module!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "7",
      "metadata": {},
      "source": [
        "## Step 5: Create a Pipeline Snapshot\n",
        "\n",
        "Snapshots make your pipelines reproducible by freezing the code, configuration, and container image. Create a snapshot directly with the Python SDK:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "8",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create a snapshot of the pipeline\n",
        "print(\"📸 Creating a pipeline snapshot...\")\n",
        "snapshot = simple_pipeline.create_snapshot(name=\"simple_snapshot\")\n",
        "print(\"✅ Snapshot created!\")\n",
        "print(f\"Snapshot ID: {snapshot.id}\")\n",
        "print(f\"Snapshot Name: {snapshot.name}\")\n",
        "print(\"\\nSnapshots freeze your pipeline state for:\")\n",
        "print(\"  • Reproducibility across runs\")\n",
        "print(\"  • Sharing with team members\")\n",
        "print(\"  • Deploying as production services\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "9",
      "metadata": {},
      "source": [
        "## Step 6: Deploy Your Pipeline as an HTTP Service\n",
        "\n",
        "Transform your pipeline into a real-time HTTP service for live inference. Use the Python SDK to deploy directly:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "10",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Deploy the pipeline as an HTTP service\n",
        "print(\"🚀 Deploying pipeline as HTTP service...\")\n",
        "deployment = simple_pipeline.deploy(deployment_name=\"simple_deployment\")\n",
        "print(\"✅ Deployment created!\")\n",
        "print(f\"Deployment URL: {deployment.url}\")\n",
        "print(f\"Status: {deployment.status}\")\n",
        "print(\"\\n📋 Your pipeline is now a REST API!\")\n",
        "print(\"\\nInvoke it with curl:\")\n",
        "print(f\"  curl -X POST {deployment.url}/invoke \\\\\")\n",
        "print(\"    -H 'Content-Type: application/json' \\\\\")\n",
        "print('    -d \\'{\"parameters\": {\"name\": \"Alice\"}}\\'')"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "11",
      "metadata": {},
      "source": [
        "## Step 7: Invoke Your Deployed Service\n",
        "\n",
        "Now let's call our deployed service to process a request:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "12",
      "metadata": {},
      "outputs": [],
      "source": [
        "from zenml.deployers.utils import invoke_deployment\n",
        "\n",
        "print(\"🌐 Invoking deployed service...\")\n",
        "response = invoke_deployment(\n",
        "    deployment_name_or_id=\"simple_deployment\", name=\"Friend\"\n",
        ")\n",
        "print(\"✅ Service response received!\")\n",
        "print(\"\\nResponse:\")\n",
        "print(f\"  Status: {response.get('success')}\")\n",
        "print(f\"  Output: {response.get('outputs', {}).get('greeting', 'N/A')}\")\n",
        "print(f\"  Execution Time: {response.get('execution_time')}s\")"
      ]
    },
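    {
      "cell_type": "markdown",
      "id": "raw-invoke-md",
      "metadata": {},
      "source": [
        "Under the hood, `invoke_deployment` sends an HTTP POST to the service's `/invoke` endpoint, just like the curl command printed in Step 6. As a sketch (assuming the request schema from that curl example), here is how you could build the same request yourself with the standard library; swap the placeholder URL for `deployment.url` and pass the request to `urllib.request.urlopen` to actually call the service:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "raw-invoke-code",
      "metadata": {},
      "outputs": [],
      "source": [
        "import json\n",
        "import urllib.request\n",
        "\n",
        "\n",
        "def build_invoke_request(base_url: str, parameters: dict) -> urllib.request.Request:\n",
        "    \"\"\"Build the POST request shown by the curl example in Step 6.\"\"\"\n",
        "    payload = json.dumps({\"parameters\": parameters}).encode(\"utf-8\")\n",
        "    return urllib.request.Request(\n",
        "        f\"{base_url.rstrip('/')}/invoke\",\n",
        "        data=payload,\n",
        "        headers={\"Content-Type\": \"application/json\"},\n",
        "        method=\"POST\",\n",
        "    )\n",
        "\n",
        "\n",
        "# Placeholder URL for illustration only; use deployment.url in practice\n",
        "request = build_invoke_request(\"http://localhost:8000\", {\"name\": \"Alice\"})\n",
        "print(request.full_url, request.get_method())"
      ]
    },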
    {
      "cell_type": "markdown",
      "id": "13",
      "metadata": {},
      "source": "# ZenML Quickstart - Summary\n\nYou've successfully learned the core ZenML concepts by running a complete workflow locally!\n\n## What You've Accomplished\n\n- ✅ **Pipelines & Steps** - Built composable workflows with automatic tracking\n- ✅ **Local Execution** - Ran pipelines with automatic artifact management\n- ✅ **Snapshots** - Froze pipeline state for reproducibility\n- ✅ **Deployments** - Transformed your pipeline into an HTTP service\n- ✅ **Dashboard** - Inspected runs and metadata in the ZenML UI\n\n## Running on Cloud Infrastructure\n\nTo run this same pipeline on cloud orchestrators (AWS, GCP, Azure):\n\n1. Deploy a ZenML server (ZenML Pro or self-hosted)\n2. Create a remote stack with cloud components\n3. Run the same code - ZenML handles containerization and infrastructure\n\nYour code doesn't change - just switch your stack!\n\n## Next Steps\n\nExplore other examples to see ZenML in action:\n\n- **[Deploying ML Models](https://github.com/zenml-io/zenml/tree/main/examples/deploying_ml_model)** - Production ML model serving\n- **[Deploying Agents](https://github.com/zenml-io/zenml/tree/main/examples/deploying_agent)** - LLM-powered agents\n- **[Agent Outer Loop](https://github.com/zenml-io/zenml/tree/main/examples/agent_outer_loop)** - Hybrid ML + agent systems\n- **[Full Documentation](https://docs.zenml.io/)** - Deep dives into all ZenML features"
    }
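,
    {
      "cell_type": "markdown",
      "id": "cloud-stack-sketch-md",
      "metadata": {},
      "source": [
        "The stack switch described above can be sketched with the ZenML CLI. The component names, flavors, and bucket path below are placeholders rather than part of this quickstart; substitute your own infrastructure:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "cloud-stack-sketch-code",
      "metadata": {},
      "outputs": [],
      "source": [
        "# A sketch of switching to a cloud stack (placeholder names throughout)\n",
        "# !zenml orchestrator register my_k8s_orchestrator --flavor=kubernetes\n",
        "# !zenml artifact-store register my_s3_store --flavor=s3 --path=s3://my-bucket\n",
        "# !zenml stack register my_cloud_stack -o my_k8s_orchestrator -a my_s3_store\n",
        "# !zenml stack set my_cloud_stack\n",
        "\n",
        "# Once the stack is active, re-running simple_pipeline() uses the cloud stack\n",
        "print(\"ℹ️ Uncomment the commands above once your cloud credentials are configured\")"
      ]
    }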
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.9.0"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}
