{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Heroku LLM Managed Inference Embedding\n",
    "\n",
    "The `llama-index-embeddings-heroku` package contains LlamaIndex integrations for building applications with embeddings models on Heroku's Managed Inference platform. This integration allows you to easily connect to and use AI models deployed on Heroku's infrastructure."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Installation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install llama-index-embeddings-heroku"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "### 1. Create a Heroku App\n",
    "\n",
    "First, create an app in Heroku:\n",
    "\n",
    "```bash\n",
    "heroku create $APP_NAME\n",
    "```\n",
    "\n",
    "### 2. Create and Attach AI Models\n",
    "\n",
    "Create and attach a chat model to your app:\n",
    "\n",
    "```bash\n",
    "heroku ai:models:create -a $APP_NAME cohere-embed-multilingual --as EMBEDDING\n",
    "```\n",
    "\n",
    "### 3. Export Configuration Variables\n",
    "\n",
    "Export the required configuration variables:\n",
    "\n",
    "```bash\n",
    "export EMBEDDING_KEY=$(heroku config:get EMBEDDING_KEY -a $APP_NAME)\n",
    "export EMBEDDING_MODEL_ID=$(heroku config:get EMBEDDING_MODEL_ID -a $APP_NAME)\n",
    "export EMBEDDING_URL=$(heroku config:get EMBEDDING_URL -a $APP_NAME)\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Usage\n",
    "\n",
    "### Basic Usage"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Initialize the Heroku LLM\n",
    "from llama_index.embeddings.heroku import HerokuEmbedding\n",
    "\n",
    "# Initialize the Heroku Embedding\n",
    "embedding_model = HerokuEmbedding()\n",
    "\n",
    "# Get a single embedding\n",
    "embedding = embedding_model.get_text_embedding(\"Hello, world!\")\n",
    "print(f\"Embedding dimension: {len(embedding)}\")\n",
    "\n",
    "# Get embeddings for multiple texts\n",
    "texts = [\"Hello\", \"world\", \"from\", \"Heroku\"]\n",
    "embeddings = embedding_model.get_text_embedding_batch(texts)\n",
    "print(f\"Number of embeddings: {len(embeddings)}\")"
   ]
  },
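  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Embedding vectors are typically compared with cosine similarity. The cell below is a minimal, standard-library sketch that uses small hand-written vectors in place of real `HerokuEmbedding` output; in practice you would pass two vectors returned by `get_text_embedding` to rank documents by semantic similarity."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    # Dot product divided by the product of the vector norms\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    norm_a = math.sqrt(sum(x * x for x in a))\n",
    "    norm_b = math.sqrt(sum(y * y for y in b))\n",
    "    return dot / (norm_a * norm_b)\n",
    "\n",
    "\n",
    "# Identical vectors score 1.0; orthogonal vectors score 0.0\n",
    "print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))\n",
    "print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))"
   ]
  },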
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using Environment Variables\n",
    "\n",
    "The integration automatically reads from environment variables:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# Set environment variables\n",
    "os.environ[\"EMBEDDING_KEY\"] = \"your-embedding-key\"\n",
    "os.environ[\"EMBEDDING_URL\"] = \"https://us.inference.heroku.com\"\n",
    "os.environ[\"EMBEDDING_MODEL_ID\"] = \"claude-3-5-haiku\"\n",
    "\n",
    "# Initialize without parameters\n",
    "llm = HerokuEmbedding()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using Parameters\n",
    "\n",
    "You can also pass parameters directly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from llama_index.embeddings.heroku import HerokuEmbedding\n",
    "\n",
    "embedding_model = HerokuEmbedding(\n",
    "    model=os.getenv(\"EMBEDDING_MODEL_ID\", \"cohere-embed-multilingual\"),\n",
    "    api_key=os.getenv(\"EMBEDDING_KEY\", \"your-embedding-key\"),\n",
    "    base_url=os.getenv(\"EMBEDDING_URL\", \"https://us.inference.heroku.com\"),\n",
    "    timeout=60.0,\n",
    ")\n",
    "\n",
    "print(embedding_model.get_text_embedding(\"Hello Heroku!\"))"
   ]
  },
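  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Initialization failures are most often caused by missing configuration. As a small, dependency-free sketch (the variable names follow the Setup section above), you can verify the config vars before constructing `HerokuEmbedding`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# The three config vars exported in the Setup section\n",
    "REQUIRED_VARS = [\"EMBEDDING_KEY\", \"EMBEDDING_MODEL_ID\", \"EMBEDDING_URL\"]\n",
    "\n",
    "missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]\n",
    "if missing:\n",
    "    print(f\"Missing configuration: {', '.join(missing)}\")\n",
    "else:\n",
    "    print(\"All Heroku embedding config vars are set\")"
   ]
  },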
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Available Models\n",
    "\n",
    "For a complete list of available models, see the [Heroku Managed Inference documentation](https://devcenter.heroku.com/articles/heroku-inference#available-models).\n",
    "\n",
    "## Error Handling\n",
    "\n",
    "The integration includes proper error handling for common issues:\n",
    "\n",
    "- Missing API key\n",
    "- Invalid inference URL\n",
    "- Missing model configuration\n",
    "\n",
    "## Additional Information\n",
    "\n",
    "For more information about Heroku Managed Inference, visit the [official documentation](https://devcenter.heroku.com/articles/heroku-inference)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
