{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "d1944ce9",
   "metadata": {},
   "source": [
    "<a href=\"https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/llm/helicone.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1a1eb5a0",
   "metadata": {},
   "source": [
    "# Helicone AI Gateway"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "41dda51b",
   "metadata": {},
   "source": [
    "Helicone is an OpenAI-compatible AI Gateway that routes requests to many model providers and adds observability, caching, and control. Learn more in the [Helicone docs](https://docs.helicone.ai/) and browse the available [models](https://www.helicone.ai/models).\n",
    "\n",
    "If you're opening this notebook on Colab, you'll likely need to install the integration packages below.\n",
    "\n",
    "Notes:\n",
    "- Only your Helicone API key is required (`HELICONE_API_KEY`); no provider keys are needed.\n",
    "- The default base URL is `https://ai-gateway.helicone.ai/v1`. Override it with the `api_base` argument or the `HELICONE_API_BASE` env var."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0c66c708",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install llama-index-llms-helicone"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f761f876",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install llama-index"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "25f8c6b8",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.helicone import Helicone\n",
    "from llama_index.core.llms import ChatMessage"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2d5ee279",
   "metadata": {},
   "source": [
    "## Call `chat` with a ChatMessage List\n",
    "You need to either set the env var `HELICONE_API_KEY` or pass `api_key` to the constructor."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "700f56f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# import os\n",
    "# os.environ[\"HELICONE_API_KEY\"] = \"<your-helicone-api-key>\"\n",
    "\n",
    "llm = Helicone(\n",
    "    api_key=\"<your-helicone-api-key>\",  # or set HELICONE_API_KEY\n",
    "    model=\"gpt-4o-mini\",  # routed via the Helicone AI Gateway\n",
    "    max_tokens=256,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8338b656",
   "metadata": {},
   "outputs": [],
   "source": [
    "message = ChatMessage(role=\"user\", content=\"Tell me a joke\")\n",
    "resp = llm.chat([message])\n",
    "print(resp)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f55aeb31",
   "metadata": {},
   "source": [
    "### Streaming"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "73761d13",
   "metadata": {},
   "outputs": [],
   "source": [
    "message = ChatMessage(role=\"user\", content=\"Tell me a story in 200 words\")\n",
    "resp = llm.stream_chat([message])\n",
    "for r in resp:\n",
    "    print(r.delta, end=\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a18dd190",
   "metadata": {},
   "source": [
    "## API Support (Chat only; no legacy Completions)\n",
    "Helicone supports OpenAI-compatible Chat Completions and the newer Responses API. The legacy Completions API is not supported.\n",
    "\n",
    "In LlamaIndex, use `llm.chat(...)` and `llm.stream_chat(...)`."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b4219f11",
   "metadata": {},
   "source": [
    "## Model Configuration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "307a934e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Choose any OpenAI-compatible model routed by Helicone.\n",
    "# See https://www.helicone.ai/models for options.\n",
    "\n",
    "# If HELICONE_API_KEY is set in your environment, you can omit api_key here.\n",
    "llm = Helicone(model=\"gpt-4o-mini\")\n",
    "message = ChatMessage(\n",
    "    role=\"user\", content=\"Write one sentence about Rust dragons coding.\"\n",
    ")\n",
    "resp = llm.chat([message])\n",
    "print(resp)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
