{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ur8xi4C7S06n"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JAPoU8Sm5E6e"
      },
      "source": [
        "# Get started with Model Garden Terraform Deployment\n",
        "\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/open-models/get_started_with_model_garden_terraform_deployment.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fopen-models%2Fget_started_with_model_garden_terraform_deployment.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/open-models/get_started_with_model_garden_terraform_deployment.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/open-models/get_started_with_model_garden_terraform_deployment.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<p>\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/open-models/get_started_with_model_garden_terraform_deployment.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/open-models/get_started_with_model_garden_terraform_deployment.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/open-models/get_started_with_model_garden_terraform_deployment.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/open-models/get_started_with_model_garden_terraform_deployment.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/open-models/get_started_with_model_garden_terraform_deployment.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>\n",
        "</p>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "84f0f73a0f76"
      },
      "source": [
        "| Authors |\n",
        "| --- |\n",
        "| [Ivan Nardini](https://github.com/inardini) |\n",
        "| [Eliza Huang](https://github.com/lizzij) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tvgnzT1CKxrO"
      },
      "source": [
        "## Overview\n",
        "\n",
        "Deploying open models on Vertex AI through Terraform provides a\n",
        "powerful infrastructure-as-code approach to manage your model\n",
        "deployments. Instead of clicking through the UI or writing custom\n",
        "API calls, you can define your entire Model Garden deployment in\n",
        "configuration files that are version-controlled, repeatable, and\n",
        "easily automated.\n",
        "\n",
        "The [Vertex AI Model Garden Terraform resource](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/vertex_ai_endpoint_with_model_garden_deployment#example-usage---vertex-ai-deploy-basic) simplifies deploying\n",
        "state-of-the-art open models by providing a declarative\n",
        "configuration interface. With Terraform, you can deploy models from\n",
        "both the curated Model Garden catalog and Hugging Face Hub, manage\n",
        "compute resources, and maintain consistent deployments across\n",
        "environments—all through simple configuration files.\n",
        "\n",
        "This tutorial shows how to use Terraform to deploy open models from\n",
        "Vertex AI Model Garden.\n",
        "\n",
        "You will learn how to:\n",
        "\n",
        "- Set up Terraform for Vertex AI Model Garden deployments\n",
        "- Find models that you can deploy\n",
        "- Deploy your first Model Garden model using Terraform\n",
        "- Handle advanced configurations including custom machine types,\n",
        "accelerators, and replicas\n",
        "- Deploy models from Hugging Face Hub\n",
        "- Handle common deployment errors and troubleshooting"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "61RBz8LLbxCR"
      },
      "source": [
        "## Get started"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YZX8wkND0-8u"
      },
      "source": [
        "### Prerequisites\n",
        "\n",
        "Before you begin, ensure you have:\n",
        "\n",
        "1. A Google Cloud project with billing enabled\n",
        "2. The Vertex AI API enabled\n",
        "3. Sufficient IAM permissions (Vertex AI Administrator or Editor role)"
      ]
    },
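    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PrereqEnableApi"
      },
      "source": [
        "If the Vertex AI API is not enabled yet, one way to enable it is with the gcloud CLI (replace the placeholder with your own project ID):\n",
        "\n",
        "```bash\n",
        "gcloud services enable aiplatform.googleapis.com --project=YOUR_PROJECT_ID\n",
        "```"
      ]
    },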
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EY7VNjxt11sZ"
      },
      "source": [
        "### Install Vertex AI SDK and other required packages"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GkNCQ2fo17lQ"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --force-reinstall --quiet 'google-cloud-aiplatform>=1.93.1' 'openai' 'google-auth' 'requests' 'huggingface_hub'"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "No17Cw5hgx12"
      },
      "source": [
        "### Install Terraform\n",
        "\n",
        "If you don't have Terraform installed, download and install it from [terraform.io](https://www.terraform.io/downloads)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "JDYCmA221J0o"
      },
      "outputs": [],
      "source": [
        "# For Linux distributions\n",
        "! wget https://releases.hashicorp.com/terraform/1.13.3/terraform_1.13.3_linux_amd64.zip\n",
        "! unzip terraform_1.13.3_linux_amd64.zip\n",
        "! sudo mv terraform /usr/local/bin/\n",
        "\n",
        "# Verify installation\n",
        "! terraform version"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dmWOrTJ3gx13"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NyKGtVQjgx13"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DF4l8DTdWgPY"
      },
      "source": [
        "### Set Google Cloud project information\n",
        "\n",
        "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Nqwi-5ufWp_B"
      },
      "outputs": [],
      "source": [
        "# Use the environment variable if the user doesn't provide Project ID.\n",
        "import os\n",
        "\n",
        "import vertexai\n",
        "\n",
        "# fmt: off\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "# fmt: on\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
        "\n",
        "vertexai.init(project=PROJECT_ID, location=LOCATION)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Pqz-1quD2Ezb"
      },
      "source": [
        "## Create a Terraform workspace\n",
        "\n",
        "Create a new directory for your Terraform configuration."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "sSxfZt9Y2GEa"
      },
      "outputs": [],
      "source": [
        "# Create a directory for your Terraform project\n",
        "! rm -rf ./model-garden-terraform && mkdir ./model-garden-terraform"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iioPtQRr2S4p"
      },
      "source": [
        "## Find the models that you can deploy\n",
        "\n",
        "In Vertex AI Model Garden, you can discover and deploy a wide range of open-source models. Many models are directly supported with optimized configurations for Vertex AI deployment.\n",
        "\n",
        "To find deployable models, you can:\n",
        "\n",
        "1. **Use the Model Garden UI**: Browse models at [console.cloud.google.com/vertex-ai/model-garden](https://console.cloud.google.com/vertex-ai/model-garden)\n",
        "2. **Use the Vertex AI SDK**: Run `model_garden.list_deployable_models()` to see available models programmatically\n",
        "3. **Check the documentation**: Review the [Model Garden documentation](https://cloud.google.com/vertex-ai/docs/start/explore-models)\n",
        "\n",
        "For this tutorial, we'll deploy models from the curated Model Garden catalog and Hugging Face Hub. Common model families include:\n",
        "\n",
        "- **Gemma** models: `publishers/google/models/gemma3@gemma-3-1b-it`\n",
        "- **Llama** models: `publishers/meta/models/llama3-2@llama-3.2-1b-instruct`\n",
        "- **PaliGemma** models: `publishers/google/models/paligemma@paligemma-224-float32`\n",
        "- **Hugging Face** models: Any model ID from the Hugging Face Hub (e.g., `Qwen/Qwen3-0.6B`)"
      ]
    },
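    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ListDeployableModels"
      },
      "outputs": [],
      "source": [
        "# A quick sketch of option 2: list deployable models with the Vertex AI SDK.\n",
        "# Assumes the project and location initialized above.\n",
        "from vertexai import model_garden\n",
        "\n",
        "deployable_models = model_garden.list_deployable_models()\n",
        "print(f\"Found {len(deployable_models)} deployable models. First few:\")\n",
        "for model_name in deployable_models[:5]:\n",
        "    print(model_name)"
      ]
    },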
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QEg4J_OF2gnv"
      },
      "source": [
        "## Deploy your first Model Garden model\n",
        "\n",
        "Let's deploy a Gemma model using Terraform with a basic configuration.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "RXD2MrU5JLuZ"
      },
      "outputs": [],
      "source": [
        "! rm -rf ./model-garden-terraform/01-basic-deployment && mkdir ./model-garden-terraform/01-basic-deployment"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Ffs9KbK22jML"
      },
      "outputs": [],
      "source": [
        "basic_deploy_config = \"\"\"\n",
        "# Configure the Terraform provider\n",
        "terraform {\n",
        "  required_providers {\n",
        "    google = {\n",
        "      source  = \"hashicorp/google\"\n",
        "      version = \"7.5.0\"\n",
        "    }\n",
        "  }\n",
        "}\n",
        "\n",
        "# Configure the Google Cloud provider\n",
        "provider \"google\" {\n",
        "  project = var.project_id\n",
        "  region  = var.region\n",
        "}\n",
        "\n",
        "# Define variables\n",
        "variable \"project_id\" {\n",
        "  description = \"Google Cloud Project ID\"\n",
        "  type        = string\n",
        "}\n",
        "\n",
        "variable \"region\" {\n",
        "  description = \"Google Cloud region\"\n",
        "  type        = string\n",
        "  default     = \"us-central1\"\n",
        "}\n",
        "\n",
        "# Deploy a Gemma model to Vertex AI\n",
        "resource \"google_vertex_ai_endpoint_with_model_garden_deployment\" \"gemma_deployment\" {\n",
        "  publisher_model_name = \"publishers/google/models/gemma3@gemma-3-1b-it\"\n",
        "  location             = var.region\n",
        "\n",
        "  model_config {\n",
        "    accept_eula = true\n",
        "  }\n",
        "}\n",
        "\n",
        "# Output the endpoint information\n",
        "output \"endpoint_id\" {\n",
        "  description = \"The ID of the deployed endpoint\"\n",
        "  value       = google_vertex_ai_endpoint_with_model_garden_deployment.gemma_deployment.id\n",
        "}\n",
        "\n",
        "output \"endpoint_name\" {\n",
        "  description = \"The name of the deployed endpoint\"\n",
        "  value       = google_vertex_ai_endpoint_with_model_garden_deployment.gemma_deployment.deployed_model_display_name\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "with open(\"./model-garden-terraform/01-basic-deployment/main.tf\", \"w\") as f:\n",
        "    f.write(basic_deploy_config)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_k8E7YEQ361R"
      },
      "source": [
        "### Create a variables file\n",
        "\n",
        "Create a `terraform.tfvars` file to set your project-specific values."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GqLBf-cU39Cy"
      },
      "outputs": [],
      "source": [
        "deploy_vars = f\"\"\"\n",
        "project_id=\"{PROJECT_ID}\"\n",
        "region=\"{LOCATION}\"\n",
        "\"\"\"\n",
        "\n",
        "with open(\"./model-garden-terraform/01-basic-deployment/terraform.tfvars\", \"w\") as f:\n",
        "    f.write(deploy_vars)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dheZZJwe4Dw1"
      },
      "source": [
        "### Initialize and deploy\n",
        "\n",
        "Run the following Terraform commands to deploy your model.\n",
        "\n",
        "Terraform will show you the planned changes and ask for confirmation. Type `yes` to proceed with the deployment.\n",
        "\n",
        "> **Note**: The deployment typically takes 10-15 minutes depending on the model size and compute resources."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WhJ02uMY4GuD"
      },
      "outputs": [],
      "source": [
        "# Set the current workspace\n",
        "! cd ./model-garden-terraform/01-basic-deployment && terraform init && terraform plan && terraform apply"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WHasl-ty4X-L"
      },
      "source": [
        "### Verify the deployment\n",
        "\n",
        "After deployment completes, you can verify the endpoint in the Google Cloud Console or use the following command."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "h4-E2V7c4dF_"
      },
      "outputs": [],
      "source": [
        "# Use Terraform to show the outputs\n",
        "! cd ./model-garden-terraform/01-basic-deployment && terraform output"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HoIgd1-XhNxY"
      },
      "source": [
        "You can also use the gcloud CLI (set `LOCATION` and `PROJECT_ID` in your shell first).\n",
        "\n",
        "```bash\n",
        "# List the Vertex AI endpoints in the region\n",
        "gcloud ai endpoints list --region=$LOCATION --project=$PROJECT_ID\n",
        "```"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sBv1FUmtHdM3"
      },
      "source": [
        "### Generate predictions\n",
        "\n",
        "After deploying your model, you can generate predictions using the Vertex AI API or SDK."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "i2BCxWDvHl5T"
      },
      "source": [
        "#### Using Vertex AI SDK for Python"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "X74_it1EHqYy"
      },
      "outputs": [],
      "source": [
        "from google.cloud import aiplatform\n",
        "\n",
        "# Initialize Vertex AI\n",
        "aiplatform.init(project=PROJECT_ID, location=LOCATION)\n",
        "\n",
        "# Set endpoint id\n",
        "ENDPOINT_ID = ! cd ./model-garden-terraform/01-basic-deployment && terraform output -raw endpoint_id\n",
        "ENDPOINT_ID = ENDPOINT_ID[0]\n",
        "\n",
        "# Get the endpoint\n",
        "endpoint = aiplatform.Endpoint(ENDPOINT_ID)\n",
        "\n",
        "# Generate prediction\n",
        "response = endpoint.predict(\n",
        "    instances=[\n",
        "        {\"prompt\": \"Tell me a joke about AI\", \"temperature\": 0.7, \"max_tokens\": 35}\n",
        "    ],\n",
        "    use_dedicated_endpoint=True,\n",
        ")\n",
        "print(response.predictions[0])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "V3E5OF8ZHxMU"
      },
      "source": [
        "#### Using OpenAI SDK (for compatible models)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qWT35swyH4I7"
      },
      "outputs": [],
      "source": [
        "import google.auth\n",
        "import openai\n",
        "from google.auth.transport.requests import Request\n",
        "\n",
        "# Get credentials\n",
        "creds, _ = google.auth.default()\n",
        "auth_req = Request()\n",
        "creds.refresh(auth_req)\n",
        "\n",
        "# Get the dedicated endpoint domain name\n",
        "endpoint_url = f\"https://{endpoint.gca_resource.dedicated_endpoint_dns}/v1beta1/{endpoint.resource_name}\"\n",
        "\n",
        "client = openai.OpenAI(base_url=endpoint_url, api_key=creds.token)\n",
        "\n",
        "# Generate prediction\n",
        "response = client.chat.completions.create(\n",
        "    model=\"\",\n",
        "    messages=[{\"role\": \"user\", \"content\": \"Tell me a joke about AI\"}],\n",
        "    temperature=0.7,\n",
        "    max_tokens=35,\n",
        ")\n",
        "\n",
        "print(response.choices[0].message.content)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zRpPtFF6hgyk"
      },
      "source": [
        "## Deploy Hugging Face models\n",
        "\n",
        "Terraform also supports deploying models directly from the Hugging Face Hub using the `hugging_face_model_id` parameter."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MvvdwTbqhxHL"
      },
      "source": [
        "### Basic Hugging Face deployment\n",
        "\n",
        "Deploy a Qwen model from Hugging Face."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "F1HkgVPDhybs"
      },
      "outputs": [],
      "source": [
        "! rm -rf ./model-garden-terraform/02-deploy-hf && mkdir ./model-garden-terraform/02-deploy-hf"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "aM-bUcGUh5f5"
      },
      "outputs": [],
      "source": [
        "deploy_hf_config = \"\"\"\n",
        "# Configure the Terraform provider\n",
        "terraform {\n",
        "  required_providers {\n",
        "    google = {\n",
        "      source  = \"hashicorp/google-beta\"\n",
        "      version = \"7.5.0\"\n",
        "    }\n",
        "  }\n",
        "}\n",
        "\n",
        "# Configure the Google Cloud provider\n",
        "provider \"google\" {\n",
        "  project = var.project_id\n",
        "  region  = var.region\n",
        "}\n",
        "\n",
        "# Define variables\n",
        "variable \"project_id\" {\n",
        "  description = \"Google Cloud Project ID\"\n",
        "  type        = string\n",
        "}\n",
        "\n",
        "variable \"region\" {\n",
        "  description = \"Google Cloud region\"\n",
        "  type        = string\n",
        "  default     = \"us-central1\"\n",
        "}\n",
        "\n",
        "# Deploy Qwen model from Hugging Face\n",
        "resource \"google_vertex_ai_endpoint_with_model_garden_deployment\" \"qwen_deployment\" {\n",
        "  hugging_face_model_id = \"Qwen/Qwen2.5-0.5B\"\n",
        "  location              = var.region\n",
        "\n",
        "  model_config {\n",
        "    accept_eula = true\n",
        "  }\n",
        "}\n",
        "\n",
        "# Output the endpoint information\n",
        "output \"endpoint_id\" {\n",
        "  description = \"The ID of the deployed endpoint\"\n",
        "  value       = google_vertex_ai_endpoint_with_model_garden_deployment.qwen_deployment.id\n",
        "}\n",
        "\n",
        "output \"endpoint_name\" {\n",
        "  description = \"The name of the deployed endpoint\"\n",
        "  value       = google_vertex_ai_endpoint_with_model_garden_deployment.qwen_deployment.deployed_model_display_name\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "with open(\"./model-garden-terraform/02-deploy-hf/main.tf\", \"w\") as f:\n",
        "    f.write(deploy_hf_config)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "byU6Z52UiJ_1"
      },
      "source": [
        "### Set the variables\n",
        "\n",
        "Use the same variables you used earlier to deploy Gemma."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "voXl9PEaiJ_1"
      },
      "outputs": [],
      "source": [
        "! cp ./model-garden-terraform/01-basic-deployment/terraform.tfvars ./model-garden-terraform/02-deploy-hf/terraform.tfvars"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e9mEglJpiPaE"
      },
      "source": [
        "### Deploy the Hugging Face model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "JJHsSAQciTsx"
      },
      "outputs": [],
      "source": [
        "# Run the deployment in the background; logs are written to terraform.log\n",
        "! nohup bash -c \"cd ./model-garden-terraform/02-deploy-hf && terraform init && terraform plan && terraform apply -auto-approve\" > ./model-garden-terraform/02-deploy-hf/terraform.log 2>&1 &"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Kcl1oKCwtf5b"
      },
      "outputs": [],
      "source": [
        "# Follow the deployment log (interrupt the cell to stop following)\n",
        "! tail -f ./model-garden-terraform/02-deploy-hf/terraform.log"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "p9xnEUZxBRdW"
      },
      "source": [
        "## Advanced scenarios\n",
        "\n",
        "The Terraform resource supports advanced deployment configurations including custom machine types, accelerators, replica counts, and more."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4262zjbBBTzY"
      },
      "source": [
        "### Deploy with custom compute resources\n",
        "\n",
        "Here's an example deploying a PaliGemma model with specific machine types and GPU accelerators.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Z6LBgepNE0R7"
      },
      "source": [
        "#### Configuration options\n",
        "\n",
        "The `deploy_config` block supports the following options:\n",
        "\n",
        "- **machine_spec**: Defines the compute resources\n",
        "  - `machine_type`: Machine type (e.g., `g2-standard-16`, `n1-standard-4`)\n",
        "  - `accelerator_type`: GPU type (e.g., `NVIDIA_L4`, `NVIDIA_TESLA_T4`)\n",
        "  - `accelerator_count`: Number of GPUs per replica\n",
        "\n",
        "- **min_replica_count**: Minimum number of replicas (for autoscaling)\n",
        "- **max_replica_count**: Maximum number of replicas (for autoscaling)\n",
        "\n",
        "\n",
        "> **Note**: This deployment uses a `g2-standard-16` machine with NVIDIA L4 GPU. Make sure you have sufficient quota for these resources in your project."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GhzQOMqqJOk1"
      },
      "outputs": [],
      "source": [
        "! rm -rf ./model-garden-terraform/03-deployment-with-config && mkdir ./model-garden-terraform/03-deployment-with-config"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3zkNmLD2BWPQ"
      },
      "outputs": [],
      "source": [
        "deploy_with_config = \"\"\"\n",
        "# Configure the Terraform provider\n",
        "terraform {\n",
        "  required_providers {\n",
        "    google = {\n",
        "      source  = \"hashicorp/google\"\n",
        "      version = \"7.5.0\"\n",
        "    }\n",
        "  }\n",
        "}\n",
        "\n",
        "# Configure the Google Cloud provider\n",
        "provider \"google\" {\n",
        "  project = var.project_id\n",
        "  region  = var.region\n",
        "}\n",
        "\n",
        "# Define variables\n",
        "variable \"project_id\" {\n",
        "  description = \"Google Cloud Project ID\"\n",
        "  type        = string\n",
        "}\n",
        "\n",
        "variable \"region\" {\n",
        "  description = \"Google Cloud region\"\n",
        "  type        = string\n",
        "  default     = \"us-central1\"\n",
        "}\n",
        "\n",
        "# Deploy PaliGemma with custom compute resources\n",
        "resource \"google_vertex_ai_endpoint_with_model_garden_deployment\" \"paligemma_deployment\" {\n",
        "  publisher_model_name = \"publishers/google/models/paligemma@paligemma-224-float32\"\n",
        "  location             = var.region\n",
        "\n",
        "  model_config {\n",
        "    accept_eula = true\n",
        "  }\n",
        "\n",
        "  deploy_config {\n",
        "    dedicated_resources {\n",
        "      machine_spec {\n",
        "        machine_type      = \"g2-standard-16\"\n",
        "        accelerator_type  = \"NVIDIA_L4\"\n",
        "        accelerator_count = 1\n",
        "      }\n",
        "      min_replica_count = 1\n",
        "      max_replica_count = 3\n",
        "    }\n",
        "  }\n",
        "}\n",
        "\n",
        "# Output the endpoint information\n",
        "output \"endpoint_id\" {\n",
        "  description = \"The ID of the deployed endpoint\"\n",
        "  value       = google_vertex_ai_endpoint_with_model_garden_deployment.paligemma_deployment.id\n",
        "}\n",
        "\n",
        "output \"endpoint_name\" {\n",
        "  description = \"The name of the deployed endpoint\"\n",
        "  value       = google_vertex_ai_endpoint_with_model_garden_deployment.paligemma_deployment.deployed_model_display_name\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "with open(\"./model-garden-terraform/03-deployment-with-config/main.tf\", \"w\") as f:\n",
        "    f.write(deploy_with_config)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "09V_ZpIiDx31"
      },
      "source": [
        "#### Set the variables\n",
        "\n",
        "Use the same variables you used earlier to deploy Gemma."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qGdlPHtlD4Qv"
      },
      "outputs": [],
      "source": [
        "! cp ./model-garden-terraform/01-basic-deployment/terraform.tfvars ./model-garden-terraform/03-deployment-with-config/terraform.tfvars"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c8hIC6EmCFya"
      },
      "source": [
        "#### Deploy the model\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4rwXm-nut1xN"
      },
      "outputs": [],
      "source": [
        "# Run the deployment in the background; logs are written to terraform.log\n",
        "! nohup bash -c \"cd ./model-garden-terraform/03-deployment-with-config && terraform init && terraform plan && terraform apply -auto-approve\" > ./model-garden-terraform/03-deployment-with-config/terraform.log 2>&1 &"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MibDxTdrhGzC"
      },
      "source": [
        "#### Check the deployment status"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7VIwcua9hGzC"
      },
      "outputs": [],
      "source": [
        "# Show the Terraform outputs for this deployment\n",
        "! cd ./model-garden-terraform/03-deployment-with-config && terraform output"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "O3m-rPmee7u7"
      },
      "source": [
        "### Deploy multiple models\n",
        "\n",
        "You can deploy multiple models in the same Terraform configuration. This example deploys both a Gemma text model and a PaliGemma vision model."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-AWP604qgN3c"
      },
      "source": [
        "#### Append multiple models to the main module\n",
        "\n",
        "In this case, we combine the previous deployments into a single main module."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "T-cYmN4ae_qs"
      },
      "outputs": [],
      "source": [
        "! rm -rf ./model-garden-terraform/04-multiple-models && mkdir ./model-garden-terraform/04-multiple-models"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tfBOHDYvfdIc"
      },
      "outputs": [],
      "source": [
        "deploy_multiple_models_config = \"\"\"\n",
        "# Configure the Terraform provider\n",
        "terraform {\n",
        "  required_providers {\n",
        "    google = {\n",
        "      source  = \"hashicorp/google\"\n",
        "      version = \"7.5.0\"\n",
        "    }\n",
        "  }\n",
        "}\n",
        "\n",
        "# Configure the Google Cloud provider\n",
        "provider \"google\" {\n",
        "  project = var.project_id\n",
        "  region  = var.region\n",
        "}\n",
        "\n",
        "# Define variables\n",
        "variable \"project_id\" {\n",
        "  description = \"Google Cloud Project ID\"\n",
        "  type        = string\n",
        "}\n",
        "\n",
        "variable \"region\" {\n",
        "  description = \"Google Cloud region\"\n",
        "  type        = string\n",
        "  default     = \"us-central1\"\n",
        "}\n",
        "\n",
        "# Deploy Gemma model\n",
        "resource \"google_vertex_ai_endpoint_with_model_garden_deployment\" \"gemma\" {\n",
        "  publisher_model_name = \"publishers/google/models/gemma3@gemma-3-1b-it\"\n",
        "  location             = var.region\n",
        "\n",
        "  model_config {\n",
        "    accept_eula = true\n",
        "  }\n",
        "}\n",
        "\n",
        "# Deploy PaliGemma model with custom resources\n",
        "resource \"google_vertex_ai_endpoint_with_model_garden_deployment\" \"paligemma\" {\n",
        "  publisher_model_name = \"publishers/google/models/paligemma@paligemma-224-float32\"\n",
        "  location             = var.region\n",
        "\n",
        "  model_config {\n",
        "    accept_eula = true\n",
        "  }\n",
        "\n",
        "  deploy_config {\n",
        "    dedicated_resources {\n",
        "      machine_spec {\n",
        "        machine_type      = \"g2-standard-16\"\n",
        "        accelerator_type  = \"NVIDIA_L4\"\n",
        "        accelerator_count = 1\n",
        "      }\n",
        "      min_replica_count = 1\n",
        "    }\n",
        "  }\n",
        "}\n",
        "\n",
        "# Output the Gemma endpoint information\n",
        "output \"gemma_endpoint_id\" {\n",
        "  description = \"The ID of the Gemma endpoint\"\n",
        "  value       = google_vertex_ai_endpoint_with_model_garden_deployment.gemma.id\n",
        "}\n",
        "\n",
        "output \"gemma_endpoint_name\" {\n",
        "  description = \"The name of the Gemma endpoint\"\n",
        "  value       = google_vertex_ai_endpoint_with_model_garden_deployment.gemma.deployed_model_display_name\n",
        "}\n",
        "\n",
        "# Output the PaliGemma endpoint information\n",
        "output \"paligemma_endpoint_id\" {\n",
        "  description = \"The ID of the PaliGemma endpoint\"\n",
        "  value       = google_vertex_ai_endpoint_with_model_garden_deployment.paligemma.id\n",
        "}\n",
        "\n",
        "output \"paligemma_endpoint_name\" {\n",
        "  description = \"The name of the PaliGemma endpoint\"\n",
        "  value       = google_vertex_ai_endpoint_with_model_garden_deployment.paligemma.deployed_model_display_name\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "with open(\"./model-garden-terraform/04-multiple-models/main.tf\", \"w\") as f:\n",
        "    f.write(deploy_multiple_models_config)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ojODg5dqgrYd"
      },
      "source": [
        "#### Set the variables\n",
        "\n",
        "Use the same variables you used before to deploy Gemma."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9NXigBOSgrYe"
      },
      "outputs": [],
      "source": [
        "! cp ./model-garden-terraform/01-basic-deployment/terraform.tfvars ./model-garden-terraform/04-multiple-models/terraform.tfvars"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bmBLkS1Hgxtl"
      },
      "source": [
        "#### Deploy both models\n",
        "\n",
        "> **Note**: Deploying multiple models will take longer (20-30 minutes total). Each model deploys to its own endpoint.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "czi0GZxJg2ky"
      },
      "outputs": [],
      "source": [
        "! nohup bash -c \"cd ./model-garden-terraform/04-multiple-models && terraform init && terraform plan && terraform apply -auto-approve\" > ./model-garden-terraform/04-multiple-models/terraform.log 2>&1 &"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Bx9iNjlg6sLV"
      },
      "outputs": [],
      "source": [
        "! tail -f ./model-garden-terraform/04-multiple-models/terraform.log"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0OX4ioc-g9N5"
      },
      "source": [
        "#### Check the deployment status"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "EL8DdsrWg_xT"
      },
      "outputs": [],
      "source": [
        "! cd ./model-garden-terraform/04-multiple-models && terraform output"
      ]
    },
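    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Terraform can also emit outputs as machine-readable JSON via `terraform output -json`, which is handy when you want to use the endpoint IDs programmatically. A minimal parsing sketch (the `sample` string mimics that JSON format; the endpoint values are placeholders, not real resources):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "# `sample` mimics what `terraform output -json` prints; in the notebook you\n",
        "# would capture the real output instead. The values below are placeholders.\n",
        "sample = \"\"\"\n",
        "{\n",
        "  \"gemma_endpoint_id\": {\"sensitive\": false, \"type\": \"string\",\n",
        "                        \"value\": \"projects/example/locations/us-central1/endpoints/123\"},\n",
        "  \"paligemma_endpoint_id\": {\"sensitive\": false, \"type\": \"string\",\n",
        "                            \"value\": \"projects/example/locations/us-central1/endpoints/456\"}\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "# Each output is wrapped in {sensitive, type, value}; keep only the values\n",
        "outputs = {name: spec[\"value\"] for name, spec in json.loads(sample).items()}\n",
        "print(outputs[\"gemma_endpoint_id\"])"
      ]
    },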
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "swdTXLwEiZH8"
      },
      "source": [
        "### Deploy gated Hugging Face models\n",
        "\n",
        "For gated models that require authentication, you'll need to provide a Hugging Face access token. This example deploys Meta's Llama model from Hugging Face."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ussRWloOixtb"
      },
      "source": [
        "#### Create the Terraform configuration\n",
        "\n",
        "In this case, you pass an additional variable for the Hugging Face access token."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ZRcVxt42jPth"
      },
      "outputs": [],
      "source": [
        "! rm -rf ./model-garden-terraform/05-gated-hf && mkdir -p ./model-garden-terraform/05-gated-hf"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cXEjV2E1i5NP"
      },
      "outputs": [],
      "source": [
        "hf_gated_deploy_config = \"\"\"\n",
        "# Configure the Terraform provider\n",
        "terraform {\n",
        "  required_providers {\n",
        "    google = {\n",
        "      source  = \"hashicorp/google-beta\"\n",
        "      version = \"7.5.0\"\n",
        "    }\n",
        "  }\n",
        "}\n",
        "\n",
        "# Configure the Google Cloud provider\n",
        "provider \"google\" {\n",
        "  project = var.project_id\n",
        "  region  = var.region\n",
        "}\n",
        "\n",
        "# Define variables\n",
        "variable \"project_id\" {\n",
        "  description = \"Google Cloud Project ID\"\n",
        "  type        = string\n",
        "}\n",
        "\n",
        "variable \"region\" {\n",
        "  description = \"Google Cloud region\"\n",
        "  type        = string\n",
        "  default     = \"us-central1\"\n",
        "}\n",
        "\n",
        "variable \"hugging_face_token\" {\n",
        "  description = \"Hugging Face access token for gated models\"\n",
        "  type        = string\n",
        "  sensitive   = true\n",
        "}\n",
        "\n",
        "# Deploy a gated Hugging Face model (Meta Llama)\n",
        "resource \"google_vertex_ai_endpoint_with_model_garden_deployment\" \"llama_deployment\" {\n",
        "  hugging_face_model_id = \"meta-llama/Llama-3.2-1B\"\n",
        "  location              = var.region\n",
        "\n",
        "  model_config {\n",
        "    accept_eula               = true\n",
        "    hugging_face_access_token = var.hugging_face_token\n",
        "  }\n",
        "}\n",
        "\n",
        "# Output the endpoint information\n",
        "output \"endpoint_id\" {\n",
        "  description = \"The ID of the deployed endpoint\"\n",
        "  value       = google_vertex_ai_endpoint_with_model_garden_deployment.llama_deployment.id\n",
        "}\n",
        "\n",
        "output \"endpoint_name\" {\n",
        "  description = \"The name of the deployed endpoint\"\n",
        "  value       = google_vertex_ai_endpoint_with_model_garden_deployment.llama_deployment.deployed_model_display_name\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "with open(\"./model-garden-terraform/05-gated-hf/main.tf\", \"w\") as f:\n",
        "    f.write(hf_gated_deploy_config)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "D4ts5mbe-G3y"
      },
      "source": [
        "#### Create a variables file\n",
        "\n",
        "Create a `terraform.tfvars` file to set your project-specific values. In this case we also add the HF_TOKEN variable."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Ka_Eu6jr8Jmw"
      },
      "outputs": [],
      "source": [
        "# Authenticate with Hugging Face\n",
        "from huggingface_hub import interpreter_login\n",
        "\n",
        "# Get it from: https://huggingface.co/settings/tokens\n",
        "interpreter_login()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ES9eeCcwihOW"
      },
      "outputs": [],
      "source": [
        "# Set your Hugging Face access token\n",
        "from huggingface_hub import get_token\n",
        "\n",
        "HF_TOKEN = get_token()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "FEydDY2T-G3z"
      },
      "outputs": [],
      "source": [
        "deploy_vars = f\"\"\"\n",
        "project_id=\"{PROJECT_ID}\"\n",
        "region=\"{LOCATION}\"\n",
        "hugging_face_token=\"{HF_TOKEN}\"\n",
        "\"\"\"\n",
        "\n",
        "with open(\"./model-garden-terraform/05-gated-hf/terraform.tfvars\", \"w\") as f:\n",
        "    f.write(deploy_vars)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "C4EIDOXnjdop"
      },
      "source": [
        "#### Deploy the gated model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Jq7p0FTfjg7A"
      },
      "outputs": [],
      "source": [
        "! nohup bash -c \"cd ./model-garden-terraform/05-gated-hf && terraform init && terraform plan && terraform apply -auto-approve\" > ./model-garden-terraform/05-gated-hf/terraform.log 2>&1 &"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "D4zuB_JV_qXn"
      },
      "outputs": [],
      "source": [
        "! tail -f ./model-garden-terraform/05-gated-hf/terraform.log"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2a4e033321ad"
      },
      "source": [
        "## Cleaning up\n",
        "\n",
        "To avoid incurring unnecessary charges, clean up the resources when you're done."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YzaEgEt3ZQzK"
      },
      "source": [
        "### Destroy specific resources\n",
        "\n",
        "To destroy only specific resources, use the `-target` flag. Terraform will show you all resources that will be deleted and ask for confirmation. Type `yes` to proceed.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6RGZDyXDZIeA"
      },
      "outputs": [],
      "source": [
        "# Destroy a specific endpoint\n",
        "! cd ./model-garden-terraform/01-basic-deployment && terraform destroy -target=google_vertex_ai_endpoint_with_model_garden_deployment.gemma_deployment"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dlYwleRwZlf8"
      },
      "source": [
        "### Destroy all resources"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7f40aeca"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "def list_subfolders(folder_path):\n",
        "    \"\"\"Lists all subfolders in a given folder path.\"\"\"\n",
        "    return [\n",
        "        os.path.join(folder_path, d)\n",
        "        for d in os.listdir(folder_path)\n",
        "        if os.path.isdir(os.path.join(folder_path, d))\n",
        "    ]\n",
        "\n",
        "\n",
        "# Folder containing all the Terraform deployment directories\n",
        "folder_to_check = \"./model-garden-terraform\"\n",
        "subfolders = list_subfolders(folder_to_check)\n",
        "\n",
        "# Destroy the Terraform-managed resources in each deployment folder\n",
        "for folder in subfolders:\n",
        "    print(f\"Destroying model in {folder}...\")\n",
        "    ! cd {folder} && terraform destroy -auto-approve\n",
        "    print(f\"Destroyed model in {folder}!\\n\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hJ-PkePaZw-Z"
      },
      "source": [
        "### Verify cleanup\n",
        "\n",
        "After destruction, verify that resources were deleted."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WitVO-lgZ1xL"
      },
      "outputs": [],
      "source": [
        "# Verify Terraform state is clean\n",
        "! cd ./model-garden-terraform/01-basic-deployment && terraform show"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9PNaL7yAjrXx"
      },
      "source": [
        "Alternatively, you can use the gcloud CLI.\n",
        "\n",
        "```bash\n",
        "# Check for remaining endpoints\n",
        "gcloud ai endpoints list --region=$LOCATION --project=$PROJECT_ID\n",
        "```"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lZwazE0daIMy"
      },
      "source": [
        "## Best practices\n",
        "\n",
        "When using Terraform for Model Garden deployments:\n",
        "\n",
        "1. **Use version control**: Store your Terraform configurations in Git to track changes and enable collaboration\n",
        "2. **Use remote state**: Configure remote state storage (e.g., Google Cloud Storage) for team environments\n",
        "3. **Separate environments**: Use Terraform workspaces or separate directories for dev/staging/prod\n",
        "4. **Use variables**: Parameterize your configurations with variables for reusability\n",
        "5. **Tag resources**: Add labels to resources for better organization and cost tracking\n",
        "6. **Plan before apply**: Always run `terraform plan` to preview changes before applying\n",
        "7. **Secure secrets**: Use environment variables or secret management tools for sensitive data (API tokens, etc.)"
      ]
    },
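    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an illustration of the remote state practice above, here is a minimal sketch of a Google Cloud Storage backend configuration (the bucket name and prefix are placeholders; the bucket must already exist):\n",
        "\n",
        "```terraform\n",
        "terraform {\n",
        "  backend \"gcs\" {\n",
        "    bucket = \"your-terraform-state-bucket\"  # placeholder: an existing GCS bucket\n",
        "    prefix = \"model-garden/state\"           # placeholder: path for state files\n",
        "  }\n",
        "}\n",
        "```"
      ]
    },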
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SagKTI6JaORZ"
      },
      "source": [
        "## Next steps\n",
        "\n",
        "Now that you've learned how to deploy models with Terraform, you can:\n",
        "\n",
        "- Explore more models in the [Model Garden catalog](https://console.cloud.google.com/vertex-ai/model-garden)\n",
        "- Learn about [Vertex AI Prediction](https://cloud.google.com/vertex-ai/docs/predictions/get-predictions) for inference\n",
        "- Automate deployments with [CI/CD pipelines](https://cloud.google.com/docs/terraform/best-practices-for-terraform#cicd)\n",
        "- Learn more about [Terraform on Google Cloud](https://cloud.google.com/docs/terraform)\n"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "get_started_with_model_garden_terraform_deployment.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
