{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Heroku LLM Managed Inference\n",
    "\n",
    "The `llama-index-llms-heroku` package contains LlamaIndex integrations for building applications with models on Heroku's Managed Inference platform. This integration allows you to easily connect to and use AI models deployed on Heroku's infrastructure."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Installation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install llama-index-llms-heroku"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "### 1. Create a Heroku App\n",
    "\n",
    "First, create an app in Heroku:\n",
    "\n",
    "```bash\n",
    "heroku create $APP_NAME\n",
    "```\n",
    "\n",
    "### 2. Create and Attach AI Models\n",
    "\n",
    "Create and attach a chat model to your app:\n",
    "\n",
    "```bash\n",
    "heroku ai:models:create -a $APP_NAME claude-3-5-haiku\n",
    "```\n",
    "\n",
    "### 3. Export Configuration Variables\n",
    "\n",
    "Export the required configuration variables:\n",
    "\n",
    "```bash\n",
    "export INFERENCE_KEY=$(heroku config:get INFERENCE_KEY -a $APP_NAME)\n",
    "export INFERENCE_MODEL_ID=$(heroku config:get INFERENCE_MODEL_ID -a $APP_NAME)\n",
    "export INFERENCE_URL=$(heroku config:get INFERENCE_URL -a $APP_NAME)\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Usage\n",
    "\n",
    "### Basic Usage"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.heroku import Heroku\n",
    "from llama_index.core.llms import ChatMessage, MessageRole\n",
    "\n",
    "# Initialize the Heroku LLM\n",
    "llm = Heroku()\n",
    "\n",
    "# Create chat messages\n",
    "messages = [\n",
    "    ChatMessage(\n",
    "        role=MessageRole.SYSTEM, content=\"You are a helpful assistant.\"\n",
    "    ),\n",
    "    ChatMessage(\n",
    "        role=MessageRole.USER,\n",
    "        content=\"What are the most popular house pets in North America?\",\n",
    "    ),\n",
    "]\n",
    "\n",
    "# Get response\n",
    "response = llm.chat(messages)\n",
    "print(response)"
   ]
  },
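  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Completion\n",
    "\n",
    "As with other LlamaIndex LLMs, you can also use the standard `complete()` method for single-prompt completions. This is a minimal sketch; it assumes the configuration variables from the setup step are exported in your environment:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.heroku import Heroku\n",
    "\n",
    "llm = Heroku()\n",
    "\n",
    "# Send a single prompt and print the resulting text\n",
    "completion = llm.complete(\"Name three popular house pets.\")\n",
    "print(completion.text)"
   ]
  },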
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using Environment Variables\n",
    "\n",
    "The integration automatically reads from environment variables:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# Set environment variables\n",
    "os.environ[\"INFERENCE_KEY\"] = \"your-inference-key\"\n",
    "os.environ[\"INFERENCE_URL\"] = \"https://us.inference.heroku.com\"\n",
    "os.environ[\"INFERENCE_MODEL_ID\"] = \"claude-3-5-haiku\"\n",
    "\n",
    "# Initialize without parameters\n",
    "llm = Heroku()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using Parameters\n",
    "\n",
    "You can also pass parameters directly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "llm = Heroku(\n",
    "    model=os.getenv(\"INFERENCE_MODEL_ID\", \"claude-3-5-haiku\"),\n",
    "    api_key=os.getenv(\"INFERENCE_KEY\", \"your-inference-key\"),\n",
    "    inference_url=os.getenv(\n",
    "        \"INFERENCE_URL\", \"https://us.inference.heroku.com\"\n",
    "    ),\n",
    "    max_tokens=1024,\n",
    ")"
   ]
  },
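  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Streaming\n",
    "\n",
    "Responses can also be streamed as they are generated. The sketch below assumes the integration follows LlamaIndex's standard `stream_chat()` interface and that credentials are configured via environment variables:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.heroku import Heroku\n",
    "from llama_index.core.llms import ChatMessage, MessageRole\n",
    "\n",
    "llm = Heroku()\n",
    "\n",
    "messages = [\n",
    "    ChatMessage(role=MessageRole.USER, content=\"Write a haiku about dogs.\")\n",
    "]\n",
    "\n",
    "# Print each incremental delta as it arrives\n",
    "for chunk in llm.stream_chat(messages):\n",
    "    print(chunk.delta, end=\"\")"
   ]
  },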
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Available Models\n",
    "\n",
    "For a complete list of available models, see the [Heroku Managed Inference documentation](https://devcenter.heroku.com/articles/heroku-inference#available-models).\n",
    "\n",
    "## Error Handling\n",
    "\n",
    "The integration includes proper error handling for common issues:\n",
    "\n",
    "- Missing API key\n",
    "- Invalid inference URL\n",
    "- Missing model configuration\n",
    "\n",
    "## Additional Information\n",
    "\n",
    "For more information about Heroku Managed Inference, visit the [official documentation](https://devcenter.heroku.com/articles/heroku-inference)."
   ]
  }
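  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Example: Handling Configuration Errors\n",
    "\n",
    "The configuration problems listed under Error Handling above surface as exceptions when the client is constructed. A minimal sketch (it assumes a missing API key raises `ValueError`; the exact exception type may differ):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "from llama_index.llms.heroku import Heroku\n",
    "\n",
    "# Remove the key from the environment for this demonstration\n",
    "os.environ.pop(\"INFERENCE_KEY\", None)\n",
    "\n",
    "try:\n",
    "    llm = Heroku()\n",
    "except ValueError as e:\n",
    "    print(f\"Configuration error: {e}\")"
   ]
  }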
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
