{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Comet"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://user-images.githubusercontent.com/7529846/230328046-a8b18c51-12e3-4617-9b39-97614a571a2d.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this guide we will demonstrate how to track your Langchain Experiments, Evaluation Metrics, and LLM Sessions with [Comet](https://www.comet.com/site/?utm_source=langchain&utm_medium=referral&utm_campaign=comet_notebook).  \n",
    "\n",
    "<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/ecosystem/comet_tracking.html\">\n",
    "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
    "</a>\n",
    "\n",
    "**Example Project:** [Comet with LangChain](https://www.comet.com/examples/comet-example-langchain/view/b5ZThK6OFdhKWVSP3fDfRtrNF/panels?utm_source=langchain&utm_medium=referral&utm_campaign=comet_notebook)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://user-images.githubusercontent.com/7529846/230326720-a9711435-9c6f-4edb-a707-94b67271ab25.png)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Install Comet and Dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install comet_ml langchain openai google-search-results spacy textstat pandas\n",
    "\n",
    "import sys\n",
    "\n",
    "!{sys.executable} -m spacy download en_core_web_sm"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Initialize Comet and Set your Credentials"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can grab your [Comet API Key here](https://www.comet.com/signup?utm_source=langchain&utm_medium=referral&utm_campaign=comet_notebook) or click the link after initializing Comet"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import comet_ml\n",
    "\n",
    "comet_ml.init(project_name=\"comet-example-langchain\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Set OpenAI and SerpAPI credentials"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You will need an [OpenAI API Key](https://platform.openai.com/account/api-keys) and a [SerpAPI API Key](https://serpapi.com/dashboard) to run the following examples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"...\"\n",
    "# os.environ[\"OPENAI_ORGANIZATION\"] = \"...\"\n",
    "os.environ[\"SERPAPI_API_KEY\"] = \"...\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Scenario 1: Using just an LLM"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datetime import datetime\n",
    "\n",
    "from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler\n",
    "from langchain.llms import OpenAI\n",
    "\n",
    "comet_callback = CometCallbackHandler(\n",
    "    project_name=\"comet-example-langchain\",\n",
    "    complexity_metrics=True,\n",
    "    stream_logs=True,\n",
    "    tags=[\"llm\"],\n",
    "    visualizations=[\"dep\"],\n",
    ")\n",
    "callbacks = [StdOutCallbackHandler(), comet_callback]\n",
    "llm = OpenAI(temperature=0.9, callbacks=callbacks, verbose=True)\n",
    "\n",
    "llm_result = llm.generate([\"Tell me a joke\", \"Tell me a poem\", \"Tell me a fact\"] * 3)\n",
    "print(\"LLM result\", llm_result)\n",
    "comet_callback.flush_tracker(llm, finish=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Scenario 2: Using an LLM in a Chain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler\n",
    "from langchain.chains import LLMChain\n",
    "from langchain.llms import OpenAI\n",
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "comet_callback = CometCallbackHandler(\n",
    "    complexity_metrics=True,\n",
    "    project_name=\"comet-example-langchain\",\n",
    "    stream_logs=True,\n",
    "    tags=[\"synopsis-chain\"],\n",
    ")\n",
    "callbacks = [StdOutCallbackHandler(), comet_callback]\n",
    "llm = OpenAI(temperature=0.9, callbacks=callbacks)\n",
    "\n",
    "template = \"\"\"You are a playwright. Given the title of play, it is your job to write a synopsis for that title.\n",
    "Title: {title}\n",
    "Playwright: This is a synopsis for the above play:\"\"\"\n",
    "prompt_template = PromptTemplate(input_variables=[\"title\"], template=template)\n",
    "synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)\n",
    "\n",
    "test_prompts = [{\"title\": \"Documentary about Bigfoot in Paris\"}]\n",
    "print(synopsis_chain.apply(test_prompts))\n",
    "comet_callback.flush_tracker(synopsis_chain, finish=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Scenario 3: Using An Agent with Tools "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.agents import initialize_agent, load_tools\n",
    "from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler\n",
    "from langchain.llms import OpenAI\n",
    "\n",
    "comet_callback = CometCallbackHandler(\n",
    "    project_name=\"comet-example-langchain\",\n",
    "    complexity_metrics=True,\n",
    "    stream_logs=True,\n",
    "    tags=[\"agent\"],\n",
    ")\n",
    "callbacks = [StdOutCallbackHandler(), comet_callback]\n",
    "llm = OpenAI(temperature=0.9, callbacks=callbacks)\n",
    "\n",
    "tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callbacks=callbacks)\n",
    "agent = initialize_agent(\n",
    "    tools,\n",
    "    llm,\n",
    "    agent=\"zero-shot-react-description\",\n",
    "    callbacks=callbacks,\n",
    "    verbose=True,\n",
    ")\n",
    "agent.run(\n",
    "    \"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\"\n",
    ")\n",
    "comet_callback.flush_tracker(agent, finish=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Scenario 4: Using Custom Evaluation Metrics"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `CometCallbackManager` also allows you to define and use Custom Evaluation Metrics to assess generated outputs from your model. Let's take a look at how this works. \n",
    "\n",
    "\n",
    "In the snippet below, we will use the [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) metric to evaluate the quality of a generated summary of an input prompt. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install rouge-score"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from rouge_score import rouge_scorer\n",
    "\n",
    "from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler\n",
    "from langchain.chains import LLMChain\n",
    "from langchain.llms import OpenAI\n",
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "\n",
    "class Rouge:\n",
    "    def __init__(self, reference):\n",
    "        self.reference = reference\n",
    "        self.scorer = rouge_scorer.RougeScorer([\"rougeLsum\"], use_stemmer=True)\n",
    "\n",
    "    def compute_metric(self, generation, prompt_idx, gen_idx):\n",
    "        prediction = generation.text\n",
    "        results = self.scorer.score(target=self.reference, prediction=prediction)\n",
    "\n",
    "        return {\n",
    "            \"rougeLsum_score\": results[\"rougeLsum\"].fmeasure,\n",
    "            \"reference\": self.reference,\n",
    "        }\n",
    "\n",
    "\n",
    "reference = \"\"\"\n",
    "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building.\n",
    "It was the first structure to reach a height of 300 metres.\n",
    "\n",
    "It is now taller than the Chrysler Building in New York City by 5.2 metres (17 ft)\n",
    "Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France .\n",
    "\"\"\"\n",
    "rouge_score = Rouge(reference=reference)\n",
    "\n",
    "template = \"\"\"Given the following article, it is your job to write a summary.\n",
    "Article:\n",
    "{article}\n",
    "Summary: This is the summary for the above article:\"\"\"\n",
    "prompt_template = PromptTemplate(input_variables=[\"article\"], template=template)\n",
    "\n",
    "comet_callback = CometCallbackHandler(\n",
    "    project_name=\"comet-example-langchain\",\n",
    "    complexity_metrics=False,\n",
    "    stream_logs=True,\n",
    "    tags=[\"custom_metrics\"],\n",
    "    custom_metrics=rouge_score.compute_metric,\n",
    ")\n",
    "callbacks = [StdOutCallbackHandler(), comet_callback]\n",
    "llm = OpenAI(temperature=0.9)\n",
    "\n",
    "synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)\n",
    "\n",
    "test_prompts = [\n",
    "    {\n",
    "        \"article\": \"\"\"\n",
    "                 The tower is 324 metres (1,063 ft) tall, about the same height as\n",
    "                 an 81-storey building, and the tallest structure in Paris. Its base is square,\n",
    "                 measuring 125 metres (410 ft) on each side.\n",
    "                 During its construction, the Eiffel Tower surpassed the\n",
    "                 Washington Monument to become the tallest man-made structure in the world,\n",
    "                 a title it held for 41 years until the Chrysler Building\n",
    "                 in New York City was finished in 1930.\n",
    "\n",
    "                 It was the first structure to reach a height of 300 metres.\n",
    "                 Due to the addition of a broadcasting aerial at the top of the tower in 1957,\n",
    "                 it is now taller than the Chrysler Building by 5.2 metres (17 ft).\n",
    "\n",
    "                 Excluding transmitters, the Eiffel Tower is the second tallest\n",
    "                 free-standing structure in France after the Millau Viaduct.\n",
    "                 \"\"\"\n",
    "    }\n",
    "]\n",
    "print(synopsis_chain.apply(test_prompts, callbacks=callbacks))\n",
    "comet_callback.flush_tracker(synopsis_chain, finish=True)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
