{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1Vs2C5ZBkrjmrHTfAyz77hpV"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VoJZgH6X77se"
      },
      "source": [
        "# Open-Source Models (Gemma) as an Agent with Gemini Enterprise"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qfDq3m0R8Fe1"
      },
      "source": [
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/oss_model_with_gemini_enterprise.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fsearch%2Fgemini-enterprise%2Foss_model_with_gemini_enterprise.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/search/gemini-enterprise/oss_model_with_gemini_enterprise.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/oss_model_with_gemini_enterprise.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "\n",
        "\n",
        "<br>\n",
        "<br>\n",
        "<br>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/oss_model_with_gemini_enterprise.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/oss_model_with_gemini_enterprise.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/oss_model_with_gemini_enterprise.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/oss_model_with_gemini_enterprise.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/search/gemini-enterprise/oss_model_with_gemini_enterprise.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UD1Cy9O88Zn9"
      },
      "source": [
        "| Author |\n",
        "| --- |\n",
        "| Parag Mhatre |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "28b524cdd168"
      },
      "source": [
        "## Building Agents with Open-Source Models and Gemini Enterprise\n",
        "\n",
        "This notebook provides an end-to-end guide for deploying an open-source Large Language Model (LLM), such as Gemma, and building a conversational agent that interacts with the deployed model.\n",
        "\n",
        "### Overview 📝\n",
        "\n",
        "This notebook walks you through the entire lifecycle of creating and deploying a custom agent powered by an open-source LLM. The key stages covered are:\n",
        "\n",
        "1. Model Deployment: Deploying an open-source model (Gemma) to a serving endpoint.\n",
        "\n",
        "2. Agent Construction: Building an agent that uses the deployed model.\n",
        "\n",
        "3. Root Agent Configuration: Providing two distinct options for the agent's core logic:\n",
        "\n",
        "   3.1. Option 1: Using a Google Gemini model as the primary \"root agent\" to orchestrate tasks and interact with the specialized Gemma model.\n",
        "\n",
        "   3.2. Option 2: Using LiteLLM to enable the deployed Gemma model itself to function as the root agent, offering a fully open-source-based solution.\n",
        "\n",
        "4. Testing & Deployment: Testing the agent locally with the Agent Development Kit (ADK), deploying it to the Agent Engine, and finally registering it in Gemini Enterprise for user access.\n",
        "\n",
        "### Background 💡\n",
        "The demand for specialized, fine-tuned LLMs is rapidly growing. Organizations are increasingly looking to leverage open-source models, training them on proprietary data to create experts in specific domains like customer support, internal documentation, or financial analysis.\n",
        "\n",
        "However, a key challenge remains: how to make these powerful, fine-tuned models easily accessible to business users? Simply deploying a model isn't enough. You need a robust, interactive, and discoverable interface. This is where agents come in. An agent acts as a smart layer on top of the model, enabling seamless conversation and integration with other tools.\n",
        "\n",
        "This notebook provides a practical blueprint for bridging that gap—from deploying a custom open-source model to making it available as a fully functional agent in Gemini Enterprise.\n",
        "\n",
        "### Business Scenarios 🏢\n",
        "This solution is ideal for organizations that want to:\n",
        "\n",
        "1. Create Expert Chatbots: Fine-tune a model like Gemma on an internal knowledge base (e.g., HR policies, technical documentation, product specs). The resulting agent can then provide instant, accurate answers to employee or customer queries.\n",
        "\n",
        "2. Develop Specialized Assistants: Build an agent trained on financial reports or market data to assist analysts with data retrieval and summarization.\n",
        "\n",
        "3. Ensure Data Privacy and Control: Use open-source models hosted within their own cloud environment to maintain full control over sensitive data, instead of relying on external model APIs.\n",
        "\n",
        "4. Reduce Costs: Leverage powerful, cost-effective open-source models as an alternative to proprietary, closed-source options.\n",
        "\n",
        "### Notebook Steps 🚀\n",
        "The notebook is structured into a clear, step-by-step process. Each section contains the necessary code and explanations to guide you through the implementation.\n",
        "\n",
        "#### Step 1: Setup and Prerequisites\n",
        "> This initial step involves importing the required libraries and configuring your project environment variables. It ensures all dependencies are in place before you begin.\n",
        "\n",
        "#### Step 2: Deploying the Open-Source Model (Gemma)\n",
        "> Here, you will deploy the open-source Gemma model to a serving endpoint (this notebook uses Cloud Run with a GPU). This makes the model available to receive requests via an API, a crucial first step for agent integration.\n",
        "\n",
        "#### Step 3: Configuring the Root Agent\n",
        "> This is a critical decision point where you choose the \"brain\" of your agent. The notebook provides two paths:\n",
        "\n",
        "> **Option A: Using Gemini as the Root Agent**\n",
        "\n",
        "> In this configuration, the powerful, multi-modal Gemini model acts as the primary orchestrator. It understands the user's intent and can decide when to call the specialized, deployed Gemma model for specific tasks. This is a great hybrid approach that combines the broad capabilities of Gemini with the specialized knowledge of your fine-tuned model.\n",
        "\n",
        "> **Option B: Using Gemma as the Root Agent via LiteLLM**\n",
        "\n",
        "> This approach uses LiteLLM, a clever library that creates a standardized interface for interacting with various LLMs. We use it to wrap our deployed Gemma model, allowing it to serve as the root agent itself. This is the perfect choice for creating a solution built entirely on open-source components.\n",
        "\n",
        "#### Step 4: Building and Testing the Agent with ADK\n",
        "> With the model deployed and the root agent configured, you will now formally build the agent. The notebook then guides you through using the Agent Development Kit (ADK) to run the agent locally. This allows for rapid testing and debugging to ensure the agent behaves as expected before a full deployment.\n",
        "\n",
        "#### Step 5: Deploying the Agent to Agent Engine\n",
        "> Once the agent is validated locally, this step shows you how to deploy it to the scalable, managed Agent Engine. This moves your agent from a local test environment to a production-ready platform.\n",
        "\n",
        "#### Step 6: Registering the Agent in Gemini Enterprise\n",
        "> In the final step, you will register your deployed agent with Gemini Enterprise. This makes the agent discoverable and accessible to authorized end-users within your organization, allowing them to easily find and interact with your new, custom-built assistant."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "k0nKem_m_tOt"
      },
      "source": [
        "#### Step 1: Setup and Prerequisites"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5bb4021007d1"
      },
      "outputs": [],
      "source": [
        "# TODO for Developer: Update project number.\n",
        "PROJECT_NUMBER = \"[your-project-number]\"\n",
        "\n",
        "# TODO for Developer: Update project ID.\n",
        "PROJECT_ID = \"[your-project-id]\"\n",
        "\n",
        "GEMMA_PARAMETER = \"gemma3-1b\"\n",
        "REGION = \"us-central1\"\n",
        "SERVICE_NAME = \"gemma-3-1b\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bc0a45394998"
      },
      "outputs": [],
      "source": [
        "# Set up the Google Cloud project.\n",
        "!gcloud config set project {PROJECT_ID}\n",
        "!gcloud config get-value project\n",
        "\n",
        "# Enable required services.\n",
        "!gcloud services enable iam.googleapis.com"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6bbb6c1939e6"
      },
      "source": [
        "#### Step 2: Deploying the Open-Source Model (Gemma)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "346fb246646c"
      },
      "outputs": [],
      "source": [
        "!gcloud run deploy {SERVICE_NAME} \\\n",
        "   --image us-docker.pkg.dev/cloudrun/container/gemma/{GEMMA_PARAMETER} \\\n",
        "   --concurrency 4 \\\n",
        "   --cpu 8 \\\n",
        "   --set-env-vars OLLAMA_NUM_PARALLEL=4 \\\n",
        "   --gpu 1 \\\n",
        "   --gpu-type nvidia-l4 \\\n",
        "   --max-instances 1 \\\n",
        "   --memory 32Gi \\\n",
        "   --no-allow-unauthenticated \\\n",
        "   --no-cpu-throttling \\\n",
        "   --timeout=600 \\\n",
        "   --region {REGION} \\\n",
        "   --no-gpu-zonal-redundancy"
      ]
    },
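    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Once the deploy completes, you can look up the service URL, which you will need as the `api_base_url` in Step 3. A minimal sketch using the `SERVICE_NAME` and `REGION` variables defined in Step 1:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Print the URL of the deployed Cloud Run service.\n",
        "!gcloud run services describe {SERVICE_NAME} --region {REGION} --format \"value(status.url)\""
      ]
    },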
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "43e765d31f23"
      },
      "source": [
        "#### Step 3: Configuring the Root Agent"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6802b8bbe4af"
      },
      "outputs": [],
      "source": [
        "%pip install --quiet google-adk==1.7.0 litellm==1.74.7 ollama==0.5.1"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 207,
      "metadata": {
        "id": "cd76cb439634"
      },
      "outputs": [],
      "source": [
        "import google.auth.transport.requests\n",
        "import google.oauth2.id_token\n",
        "from google.adk.agents import LlmAgent\n",
        "from google.genai import types\n",
        "from ollama import Client\n",
        "\n",
        "# TODO for Developer: Update with the URL of your deployed Cloud Run service.\n",
        "api_base_url = \"https://gemma-3-1b-[your-project-number].us-central1.run.app\"\n",
        "model_name_at_endpoint = \"ollama/gemma3:1b\"\n",
        "\n",
        "# Create a Google Auth request object\n",
        "auth_req = google.auth.transport.requests.Request()\n",
        "\n",
        "# Generate the ID token\n",
        "id_token = google.oauth2.id_token.fetch_id_token(auth_req, api_base_url)\n",
        "\n",
        "# Make the request with the ID token\n",
        "auth_headers = {\"Authorization\": f\"Bearer {id_token}\"}\n",
        "\n",
        "\n",
        "def get_answer(question: str):\n",
        "    \"\"\"Answer the question using the deployed Gemma model.\n",
        "\n",
        "    Args:\n",
        "        question: The user's question.\n",
        "\n",
        "    Returns:\n",
        "        A detailed answer based on the user's question.\n",
        "    \"\"\"\n",
        "    try:\n",
        "        client = Client(host=api_base_url, headers=auth_headers)\n",
        "        response = client.chat(\n",
        "            model=\"gemma3:1b\",\n",
        "            messages=[\n",
        "                {\n",
        "                    \"role\": \"user\",\n",
        "                    \"content\": question,\n",
        "                },\n",
        "            ],\n",
        "        )\n",
        "\n",
        "        return response.message.content\n",
        "    except Exception as e:\n",
        "        print(e)\n",
        "        return \"Not able to provide answer.\"\n",
        "\n",
        "\n",
        "# Option B: Use the deployed Gemma model itself as the root agent via LiteLLM.\n",
        "# from google.adk.models.lite_llm import LiteLlm\n",
        "# model=LiteLlm(\n",
        "#         model=model_name_at_endpoint,\n",
        "#         api_base=api_base_url,\n",
        "#         extra_headers=auth_headers\n",
        "#     )\n",
        "\n",
        "# Option A: Use Gemini as the root agent, with the Gemma tool as a supporting agent for better results.\n",
        "\n",
        "gemma_agent = LlmAgent(\n",
        "    model=\"gemini-2.5-pro\",\n",
        "    name=\"gemma_agent\",\n",
        "    instruction=\"\"\"1. You are a helpful assistant; your task is to answer the question.\n",
        "                   2. Only answer the question asked by the user; do not ask follow-up questions.\n",
        "                   3. If the function call tool produces no answer, respond with 'Sorry, an answer can't be generated.'\n",
        "                   4. Only answer based on the context provided by the tool. Do not add additional text. Preferably return the answer as is.\n",
        "                   \"\"\",\n",
        "    tools=[get_answer],\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1f38a350f0ee"
      },
      "source": [
        "#### Step 4: Building and Testing the Agent with ADK"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "00d4b030125f"
      },
      "outputs": [],
      "source": [
        "from vertexai.preview import reasoning_engines\n",
        "\n",
        "app = reasoning_engines.AdkApp(\n",
        "    agent=gemma_agent,\n",
        "    enable_tracing=True,\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "130cbe7d0d11"
      },
      "outputs": [],
      "source": [
        "session = app.create_session(user_id=\"u_1232\")\n",
        "session"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "801692e7994a"
      },
      "outputs": [],
      "source": [
        "query = \"Where is India located?\"\n",
        "contents = types.Content(role=\"user\", parts=[types.Part.from_text(text=query)])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "d590093630ce"
      },
      "outputs": [],
      "source": [
        "contents"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6c9559e54427"
      },
      "outputs": [],
      "source": [
        "for event in app.stream_query(\n",
        "    user_id=\"u_1232\",\n",
        "    session_id=session.id,\n",
        "    message=contents.model_dump(),\n",
        "):\n",
        "    print(event)"
      ]
    },
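    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The streamed events are dictionaries, with the answer text nested inside `content.parts`. The helper below is an illustrative sketch of pulling out just the text; the exact event shape is an assumption, so inspect a printed event from the cell above to confirm it before relying on it:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def extract_text(event: dict) -> str:\n",
        "    \"\"\"Concatenate any text parts found in a streamed event (assumed shape).\"\"\"\n",
        "    parts = (event.get(\"content\") or {}).get(\"parts\") or []\n",
        "    return \"\".join(part.get(\"text\", \"\") for part in parts)\n",
        "\n",
        "\n",
        "# Self-contained demo on a sample event; with the stream above you would call\n",
        "# extract_text(event) inside the loop instead of printing the raw event.\n",
        "sample_event = {\"content\": {\"parts\": [{\"text\": \"India is located in South Asia.\"}]}}\n",
        "print(extract_text(sample_event))"
      ]
    },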
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "31bd98e3819f"
      },
      "source": [
        "#### Step 5: Deploying the Agent to Agent Engine"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "c066be5562e3"
      },
      "outputs": [],
      "source": [
        "import vertexai\n",
        "\n",
        "LOCATION = \"us-central1\"  # TODO for Developer: Update region here.\n",
        "STAGING_BUCKET = (\n",
        "    \"gs://[bucket-name]\"  # TODO for Developer: Update GCS bucket name here.\n",
        ")\n",
        "vertexai.init(project=PROJECT_NUMBER, location=LOCATION, staging_bucket=STAGING_BUCKET)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7fded9eca51a"
      },
      "outputs": [],
      "source": [
        "from vertexai import agent_engines\n",
        "\n",
        "remote_app = agent_engines.create(\n",
        "    display_name=\"Gemma Agent v7\",\n",
        "    agent_engine=app,\n",
        "    requirements=[\n",
        "        \"litellm (==1.74.7)\",\n",
        "        \"google-adk (==1.7.0)\",\n",
        "        \"google-genai (==1.24.0)\",\n",
        "        \"pydantic (==2.11.7)\",\n",
        "        \"ollama (==0.5.1)\",\n",
        "    ],\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7bea20750e56"
      },
      "outputs": [],
      "source": [
        "remote_app.resource_name"
      ]
    },
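    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before registering the agent, you can smoke-test the remote deployment. The sketch below mirrors the local ADK test; it assumes remote sessions are returned as dictionaries with an `id` key, so adjust the lookup if your session object differs:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Query the deployed agent on Agent Engine, mirroring the local test.\n",
        "remote_session = remote_app.create_session(user_id=\"u_1232\")\n",
        "\n",
        "for event in remote_app.stream_query(\n",
        "    user_id=\"u_1232\",\n",
        "    session_id=remote_session[\"id\"],\n",
        "    message=\"Where is India located?\",\n",
        "):\n",
        "    print(event)"
      ]
    },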
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9355b5b8f624"
      },
      "source": [
        "#### Step 6: Registering the Agent in Gemini Enterprise"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2b5a9e686af1"
      },
      "source": [
        "**Create an OAuth consent screen and OAuth client, then provide the \"clientId\" and \"clientSecret\" below.**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3635a286d182"
      },
      "outputs": [],
      "source": [
        "%%bash\n",
        "\n",
        "curl -X POST \\\n",
        "-H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n",
        "-H \"Content-Type: application/json\" \\\n",
        "-H \"X-Goog-User-Project: [your-project-id]\" \\\n",
        "\"https://discoveryengine.googleapis.com/v1alpha/projects/[your-project-id]/locations/global/authorizations?authorizationId=customhr9893\" \\\n",
        "-d '{\n",
        "\"name\": \"projects/[your-project-id]/locations/global/authorizations/customhr9893\",\n",
        "\"serverSideOauth2\": {\n",
        "\"clientId\": \"[UPDATE-CLIENT-ID]\",\n",
        "\"clientSecret\": \"[UPDATE-CLIENT-SECRET]\",\n",
        "\"authorizationUri\": \"https://accounts.google.com/o/oauth2/v2/auth?client_id=[UPDATE-CLIENT-ID]&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform&include_granted_scopes=true&response_type=code&access_type=offline&prompt=consent\",\n",
        "\"tokenUri\": \"https://oauth2.googleapis.com/token\"\n",
        "}\n",
        "}'"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "b33096f67166"
      },
      "outputs": [],
      "source": [
        "%%bash\n",
        "\n",
        "curl -X POST \\\n",
        "-H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n",
        "-H \"Content-Type: application/json\" \\\n",
        "-H \"X-Goog-User-Project: [your-project-number]\" \\\n",
        "\"https://discoveryengine.googleapis.com/v1alpha/projects/[your-project-number]/locations/global/collections/default_collection/engines/[your-gemini-enterprise-engine-id]/assistants/default_assistant/agents\" \\\n",
        "-d '{\n",
        "\"displayName\": \"Gemma Agent v7\",\n",
        "\"description\": \"Gemma Agent to answer questions.\",\n",
        "\"adk_agent_definition\": {\n",
        "\"tool_settings\": {\n",
        "\"tool_description\": \"Gemma Agent to answer questions.\"\n",
        "},\n",
        "\"provisioned_reasoning_engine\": {\n",
        "\"reasoning_engine\": \"projects/[your-project-number]/locations/us-central1/reasoningEngines/[reasoning-engine-id]\"\n",
        "},\n",
        "\"authorizations\": [\"projects/[your-project-number]/locations/global/authorizations/customhr9893\"]\n",
        "}\n",
        "}'"
      ]
    },
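    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To confirm the registration succeeded, you can list the agents under the assistant with a `GET` on the same `agents` collection used above (the same placeholders apply):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "%%bash\n",
        "\n",
        "# List registered agents to confirm the new agent appears.\n",
        "curl -X GET \\\n",
        "-H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n",
        "-H \"X-Goog-User-Project: [your-project-number]\" \\\n",
        "\"https://discoveryengine.googleapis.com/v1alpha/projects/[your-project-number]/locations/global/collections/default_collection/engines/[your-gemini-enterprise-engine-id]/assistants/default_assistant/agents\""
      ]
    },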
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "69085722b50a"
      },
      "source": [
        "#### After the agent is registered, you can interact with it in Gemini Enterprise."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2d990f30c874"
      },
      "source": [
        "![oss_model_with_gemini_enterprise](https://services.google.com/fh/files/misc/oss_model_with_agentspace_v2.png)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "oss_model_with_gemini_enterprise.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
