{
  "cells": [
    {
      "cell_type": "markdown",
      "id": "635d8ebb",
      "metadata": {},
      "source": [
        "# LLM-as-Judge\n",
        "\n",
        "- Author: [Sunyoung Park (architectyou)](https://github.com/architectyou)\n",
        "- Design: \n",
        "- Peer Review: \n",
        "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n",
        "\n",
        "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/99-TEMPLATE/00-BASE-TEMPLATE-EXAMPLE.ipynb) [![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/99-TEMPLATE/00-BASE-TEMPLATE-EXAMPLE.ipynb)\n",
        "\n",
        "## Overview\n",
        "\n",
        "**LLM-as-a-judge** is one of the methods for evaluating and improving large language models, where an LLM evaluates the outputs of other models, similar to human evaluation.\n",
        "LLMs are difficult to evaluate because their capabilities go beyond simply selecting correct answers.\n",
        "\n",
        "Therefore, although still imperfect, using a second LLM as an evaluator - that is, **LLM-as-a-Judge** - is expected to be an effective way to evaluate such capabilities.\n",
        "\n",
        "Typically, a model that is larger and more capable than the one used in the target LLM application is used as the evaluation model.\n",
        "In this tutorial, we will explore the **Off-the-shelf Evaluators** provided by LangSmith.\n",
        "**Off-the-shelf Evaluators** refer to pre-defined prompt-based LLM evaluators.\n",
        "While they are easy to use, you need to define custom evaluators to access more advanced features. Basically, evaluation is performed by passing the following three pieces of information to the LLM evaluator:\n",
        "\n",
        "- `input`: Question defined in the dataset\n",
        "- `prediction`: Answer generated by LLM\n",
        "- `reference`: Answer defined in the dataset\n",
        "\n",
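        "A minimal plain-Python sketch (the `build_judge_prompt` helper is hypothetical, for illustration only) of how these three fields are combined into a grading prompt for the judge LLM:\n",
        "\n",
        "```python\n",
        "# Hypothetical sketch of the LLM-as-a-Judge idea, not a real LangSmith API.\n",
        "def build_judge_prompt(input: str, prediction: str, reference: str) -> str:\n",
        "    # The judge LLM grades the prediction against the reference answer\n",
        "    lines = [\n",
        "        \"You are a grader. Respond with CORRECT or INCORRECT.\",\n",
        "        f\"QUESTION: {input}\",\n",
        "        f\"STUDENT ANSWER: {prediction}\",\n",
        "        f\"TRUE ANSWER: {reference}\",\n",
        "        \"GRADE:\",\n",
        "    ]\n",
        "    return \"\\n\".join(lines)\n",
        "```\n",
        "\n",
        "In practice, the judge LLM completes such a prompt with a grade, which LangSmith records as feedback.\n",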
        "### Table of Contents\n",
        "\n",
        "- [Overview](#overview)\n",
        "- [Environment Setup](#environment-setup)\n",
        "- [Define functions for RAG performance testing](#define-functions-for-rag-performance-testing)\n",
        "- [Question-Answer Evaluator](#question-answer-evaluator)\n",
        "- [Context-based Answer Evaluator](#context-based-answer-evaluator)\n",
        "- [Criteria](#criteria)\n",
        "- [Use of Evaluator when correct answers exist (labeled_criteria)](#use-of-evaluator-when-correct-answers-exist-labeled_criteria)\n",
        "- [Custom function Evaluator](#custom-function-evaluator)\n",
        "\n",
        "### References\n",
        "\n",
        "- [A Survey on LLM-as-a-Judge](https://arxiv.org/abs/2411.15594)\n",
        "- [LangSmith LLM-as-judge](https://docs.smith.langchain.com/evaluation/concepts#llm-as-judge)\n",
        "- [LangSmith How to define an LLM-as-a-judge evaluator](https://docs.smith.langchain.com/evaluation/how_to_guides/llm_as_judge)\n",
        "- [LangSmith How to use off-the-shelf evaluators(Python Only)](https://docs.smith.langchain.com/evaluation/how_to_guides/use_langchain_off_the_shelf_evaluators)\n",
        "----"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "c6c7aba4",
      "metadata": {},
      "source": [
        "## Environment Setup\n",
        "\n",
        "Setting up your environment is the first step. See the [Environment Setup](https://wikidocs.net/257836) guide for more details.\n",
        "\n",
        "\n",
        "**[Note]**\n",
        "- `langchain-opentutorial` is a package that provides a set of easy-to-use environment setup tools, useful functions, and utilities for tutorials. \n",
        "- You can check out [`langchain-opentutorial`](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "id": "21943adb",
      "metadata": {},
      "outputs": [],
      "source": [
        "%%capture --no-stderr\n",
        "%pip install langchain-opentutorial"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "id": "f25ec196",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Install required packages\n",
        "from langchain_opentutorial import package\n",
        "\n",
        "package.install(\n",
        "    [\n",
        "        \"langsmith\",\n",
        "        \"langchain_openai\",\n",
        "        \"pymupdf\",\n",
        "        \"faiss-cpu\",  # if a GPU is available, use \"faiss-gpu\"\n",
        "    ],\n",
        "    verbose=False,\n",
        "    upgrade=False,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "690a9ae0",
      "metadata": {},
      "source": [
        "You can set API keys in a `.env` file or set them manually.\n",
        "\n",
        "[Note] If you’re not using the `.env` file, no worries! Just enter the keys directly in the cell below, and you’re good to go."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "id": "327c2c7c",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Environment variables have been set successfully.\n"
          ]
        }
      ],
      "source": [
        "# Set environment variables\n",
        "from langchain_opentutorial import set_env\n",
        "\n",
        "set_env(\n",
        "    {\n",
        "        \"OPENAI_API_KEY\": \"\",\n",
        "        \"LANGCHAIN_API_KEY\": \"\",\n",
        "        \"LANGCHAIN_TRACING_V2\": \"true\",\n",
        "        \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n",
        "        \"LANGCHAIN_PROJECT\": \"LLM-as-Judge\",\n",
        "    }\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "id": "733a07e8",
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "True"
            ]
          },
          "execution_count": 4,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# Load API keys from .env file\n",
        "from dotenv import load_dotenv\n",
        "\n",
        "load_dotenv(override=True)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "aa00c3f4",
      "metadata": {},
      "source": [
        "## Define functions for RAG performance testing\n",
        "Create a RAG system to use for testing."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "id": "6b271138",
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "'The authors are Julia Wiesinger, Patrick Marlow, and Vladimir Vuskovic.'"
            ]
          },
          "execution_count": 5,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "from myrag import PDFRAG\n",
        "from langchain_openai import ChatOpenAI\n",
        "\n",
        "# Create PDFRAG object\n",
        "rag = PDFRAG(\n",
        "    \"data/Newwhitepaper_Agents2.pdf\",\n",
        "    ChatOpenAI(model=\"gpt-4o-mini\", temperature=0),\n",
        ")\n",
        "\n",
        "# Create Retriever\n",
        "retriever = rag.create_retriever()\n",
        "\n",
        "# Create Chain\n",
        "chain = rag.create_chain(retriever)\n",
        "\n",
        "# Generate answer for question\n",
        "chain.invoke(\"List up the name of the authors\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "fe9fd410",
      "metadata": {},
      "source": [
        "Create a function named `ask_question`. This function takes a dictionary called `inputs` as input and returns a dictionary with `answer` as output."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "id": "69cb77da",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create function to answer question\n",
        "def ask_question(inputs: dict):\n",
        "    return {\"answer\": chain.invoke(inputs[\"question\"])}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "id": "77f127b9",
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "{'answer': 'The authors are Julia Wiesinger, Patrick Marlow, and Vladimir Vuskovic.'}"
            ]
          },
          "execution_count": 7,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# An example user question\n",
        "llm_answer = ask_question(\n",
        "    {\"question\": \"List up the name of the authors\"}\n",
        ")\n",
        "llm_answer"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "1ce0d658",
      "metadata": {},
      "source": [
        "Define a function to print the evaluator's prompt."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "id": "b350cf06",
      "metadata": {},
      "outputs": [],
      "source": [
        "# The function for evaluator prompt output\n",
        "def print_evaluator_prompt(evaluator):\n",
        "    return evaluator.evaluator.prompt.pretty_print()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "8070eda6",
      "metadata": {},
      "source": [
        "## Question-Answer Evaluator\n",
        "\n",
        "This is the most basic evaluator, which assesses questions (Query) and answers (Answer).\n",
        "\n",
        "User input is defined as `input`, LLM-generated response as `prediction`, and the correct answer as `reference`.\n",
        "\n",
        "However, in the prompt variables, they are defined as `query`, `result`, and `answer`.\n",
        "- `query` : User input\n",
        "- `result` : LLM-generated response\n",
        "- `answer` : Correct answer"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "id": "dd633e0b",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "You are a teacher grading a quiz.\n",
            "You are given a question, the student's answer, and the true answer, and are asked to score the student answer as either CORRECT or INCORRECT.\n",
            "\n",
            "Example Format:\n",
            "QUESTION: question here\n",
            "STUDENT ANSWER: student's answer here\n",
            "TRUE ANSWER: true answer here\n",
            "GRADE: CORRECT or INCORRECT here\n",
            "\n",
            "Grade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin! \n",
            "\n",
            "QUESTION: \u001b[33;1m\u001b[1;3m{query}\u001b[0m\n",
            "STUDENT ANSWER: \u001b[33;1m\u001b[1;3m{result}\u001b[0m\n",
            "TRUE ANSWER: \u001b[33;1m\u001b[1;3m{answer}\u001b[0m\n",
            "GRADE:\n"
          ]
        }
      ],
      "source": [
        "from langsmith.evaluation import evaluate, LangChainStringEvaluator\n",
        "\n",
        "# Create Question-Answer Evaluator\n",
        "qa_evalulator = LangChainStringEvaluator(\"qa\")\n",
        "\n",
        "# Print the prompt\n",
        "print_evaluator_prompt(qa_evalulator)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "id": "af019fa7",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "View the evaluation results for experiment: 'RAG_EVAL-899d197c' at:\n",
            "https://smith.langchain.com/o/9089d1d3-e786-4000-8468-66153f05444b/datasets/9b4ca107-33fe-4c71-bb7f-488272d895a3/compare?selectedSessions=66642322-32fe-4d30-9275-cca885c98205\n",
            "\n",
            "\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "6b6336ae1b0d4beea8a6c99244b58d0d",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "0it [00:00, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "# Set the dataset name\n",
        "dataset_name = \"RAG_EVAL_DATASET\"\n",
        "\n",
        "# Execute evaluation\n",
        "experiment_results = evaluate(\n",
        "    ask_question,\n",
        "    data=dataset_name,\n",
        "    evaluators=[qa_evalulator],\n",
        "    experiment_prefix=\"RAG_EVAL\",\n",
        "    # Specify experiment metadata\n",
        "    metadata={\n",
        "        \"variant\": \"Evaluation with QA Evaluator\",\n",
        "    },\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "13b77c54",
      "metadata": {},
      "source": [
        "![rag-eval](./assets/05-langsmith-llm-as-judge-01.png)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6c80e03e",
      "metadata": {},
      "source": [
        "## Context-based Answer Evaluator\n",
        "\n",
        "- `LangChainStringEvaluator(\"context_qa\")`: Instructs the LLM chain to use reference \"context\" for determining accuracy.\n",
        "- `LangChainStringEvaluator(\"cot_qa\")`: \"cot_qa\" is similar to the \"context_qa\" evaluator, but differs in that it instructs the LLM to use 'chain-of-thought' reasoning before making a final judgment.\n",
        "\n",
        "[Note]\n",
        "\n",
        "First, you need to define a function that returns the context: `context_answer_rag_answer`.\n",
        "Then, create a `LangChainStringEvaluator`. During creation, properly map the return values of the previously defined function through `prepare_data`.\n",
        "\n",
        "[Details]\n",
        "- `run` : Results generated by LLM (context, answer, input)\n",
        "- `example` : Data defined in the dataset (question and answer)\n",
        "\n",
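        "Conceptually, `prepare_data` reads fields off the `run` and `example` objects and returns them under the names the evaluator expects. A minimal sketch with hypothetical stand-in objects (the real LangSmith `Run` and `Example` types behave similarly for this purpose):\n",
        "\n",
        "```python\n",
        "from types import SimpleNamespace\n",
        "\n",
        "# Hypothetical stand-ins for the LangSmith Run and Example objects\n",
        "run = SimpleNamespace(\n",
        "    outputs={\"context\": \"retrieved text\", \"answer\": \"LLM answer\", \"query\": \"a question\"}\n",
        ")\n",
        "example = SimpleNamespace(inputs={\"question\": \"a question\"})\n",
        "\n",
        "def prepare_data(run, example):\n",
        "    # Map the RAG function's return values to the evaluator's expected fields\n",
        "    return {\n",
        "        \"prediction\": run.outputs[\"answer\"],  # answer generated by the LLM\n",
        "        \"reference\": run.outputs[\"context\"],  # retrieved context used as reference\n",
        "        \"input\": example.inputs[\"question\"],  # question defined in the dataset\n",
        "    }\n",
        "```\n",
        "\n",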
        "The LangChainStringEvaluator needs the following three pieces of information to perform evaluation:\n",
        "\n",
        "- `prediction` : Answer generated by LLM\n",
        "- `reference` : Answer defined in the dataset\n",
        "- `input` : Question defined in the dataset\n",
        "\n",
        "However, since `LangChainStringEvaluator(\"context_qa\")` uses `reference` as the context, it is defined differently.\n",
        "(Note) Below, we define a function that returns `context`, `answer`, and `question` to utilize the `context_qa` evaluator."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "id": "89875cee",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Define Context-based RAG Response function\n",
        "def context_answer_rag_answer(inputs: dict):\n",
        "    context = retriever.invoke(inputs[\"question\"])\n",
        "    return {\n",
        "        \"context\": \"\\n\".join([doc.page_content for doc in context]),\n",
        "        \"answer\": chain.invoke(inputs[\"question\"]),\n",
        "        \"query\": inputs[\"question\"],\n",
        "    }"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "id": "ce6fe2aa",
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "{'context': 'Agents\\nAuthors: Julia Wiesinger, Patrick Marlow \\nand Vladimir Vuskovic\\nAgents\\n2\\nSeptember 2024\\nAcknowledgements\\nReviewers and Contributors\\nEvan Huang\\nEmily Xue\\nOlcan Sercinoglu\\nSebastian Riedel\\nSatinder Baveja\\nAntonio Gulli\\nAnant Nawalgaria\\nCurators and Editors\\nAntonio Gulli\\nAnant Nawalgaria\\nGrace Mollison \\nTechnical Writer\\nJoey Haymaker\\nDesigner\\nMichael Lanning\\n38\\nSummary\\x08\\n40\\nEndnotes\\x08\\n42\\nTable of contents\\nAgents\\n22\\nSeptember 2024\\nUnset\\nfunction_call {\\n  name: \"display_cities\"\\n  args: {\\n    \"cities\": [\"Crested Butte\", \"Whistler\", \"Zermatt\"],\\n    \"preferences\": \"skiing\"\\n    }\\n}\\nSnippet 5. Sample Function Call payload for displaying a list of cities and user preferences',\n",
              " 'answer': 'The authors are Julia Wiesinger, Patrick Marlow, and Vladimir Vuskovic.',\n",
              " 'query': 'List up the name of the authors'}"
            ]
          },
          "execution_count": 12,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# Execute the function\n",
        "context_answer_rag_answer(\n",
        "    {\"question\": \"List up the name of the authors\"}\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "id": "1c50cdf4",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "COT_QA Evaluator Prompt\n",
            "You are a teacher grading a quiz.\n",
            "You are given a question, the context the question is about, and the student's answer. You are asked to score the student's answer as either CORRECT or INCORRECT, based on the context.\n",
            "Write out in a step by step manner your reasoning to be sure that your conclusion is correct. Avoid simply stating the correct answer at the outset.\n",
            "\n",
            "Example Format:\n",
            "QUESTION: question here\n",
            "CONTEXT: context the question is about here\n",
            "STUDENT ANSWER: student's answer here\n",
            "EXPLANATION: step by step reasoning here\n",
            "GRADE: CORRECT or INCORRECT here\n",
            "\n",
            "Grade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin! \n",
            "\n",
            "QUESTION: \u001b[33;1m\u001b[1;3m{query}\u001b[0m\n",
            "CONTEXT: \u001b[33;1m\u001b[1;3m{context}\u001b[0m\n",
            "STUDENT ANSWER: \u001b[33;1m\u001b[1;3m{result}\u001b[0m\n",
            "EXPLANATION:\n",
            "Context_QA Evaluator Prompt\n",
            "You are a teacher grading a quiz.\n",
            "You are given a question, the context the question is about, and the student's answer. You are asked to score the student's answer as either CORRECT or INCORRECT, based on the context.\n",
            "\n",
            "Example Format:\n",
            "QUESTION: question here\n",
            "CONTEXT: context the question is about here\n",
            "STUDENT ANSWER: student's answer here\n",
            "GRADE: CORRECT or INCORRECT here\n",
            "\n",
            "Grade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin! \n",
            "\n",
            "QUESTION: \u001b[33;1m\u001b[1;3m{query}\u001b[0m\n",
            "CONTEXT: \u001b[33;1m\u001b[1;3m{context}\u001b[0m\n",
            "STUDENT ANSWER: \u001b[33;1m\u001b[1;3m{result}\u001b[0m\n",
            "GRADE:\n"
          ]
        }
      ],
      "source": [
        "# Create a cot_qa Evaluator\n",
        "cot_qa_evaluator = LangChainStringEvaluator(\n",
        "    \"cot_qa\",\n",
        "    prepare_data=lambda run, example: {\n",
        "        \"prediction\": run.outputs[\"answer\"],  # Generated answer by LLM\n",
        "        \"reference\": run.outputs[\"context\"],  # Context\n",
        "        \"input\": example.inputs[\"question\"],  # Question defined in the dataset\n",
        "    },\n",
        ")\n",
        "\n",
        "# Create a context_qa Evaluator\n",
        "context_qa_evaluator = LangChainStringEvaluator(\n",
        "    \"context_qa\",\n",
        "    prepare_data=lambda run, example: {\n",
        "        \"prediction\": run.outputs[\"answer\"],  # Generated answer by LLM\n",
        "        \"reference\": run.outputs[\"context\"],  # Context\n",
        "        \"input\": example.inputs[\"question\"],  # Question defined in the dataset\n",
        "    },\n",
        ")\n",
        "\n",
        "# Print evaluator prompt output\n",
        "print(\"COT_QA Evaluator Prompt\")\n",
        "print_evaluator_prompt(cot_qa_evaluator)\n",
        "print(\"Context_QA Evaluator Prompt\")\n",
        "print_evaluator_prompt(context_qa_evaluator)\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "7fecfae8",
      "metadata": {},
      "source": [
        "Run the evaluation and check the returned results."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "id": "89c0fb0e",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "View the evaluation results for experiment: 'RAG_EVAL-8087cb7d' at:\n",
            "https://smith.langchain.com/o/9089d1d3-e786-4000-8468-66153f05444b/datasets/9b4ca107-33fe-4c71-bb7f-488272d895a3/compare?selectedSessions=80980c43-0edd-4483-a4a7-18eb2bf81d3b\n",
            "\n",
            "\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "7f2293aec0b6479a867371908c64ae19",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "0it [00:00, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<div>\n",
              "<style scoped>\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "</style>\n",
              "<table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr style=\"text-align: right;\">\n",
              "      <th></th>\n",
              "      <th>inputs.question</th>\n",
              "      <th>outputs.context</th>\n",
              "      <th>outputs.answer</th>\n",
              "      <th>outputs.query</th>\n",
              "      <th>error</th>\n",
              "      <th>reference.answer</th>\n",
              "      <th>feedback.COT Contextual Accuracy</th>\n",
              "      <th>feedback.Contextual Accuracy</th>\n",
              "      <th>execution_time</th>\n",
              "      <th>example_id</th>\n",
              "      <th>id</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th>0</th>\n",
              "      <td>What are the three targeted learnings to enhan...</td>\n",
              "      <td>Agents\\n33\\nSeptember 2024\\nEnhancing model pe...</td>\n",
              "      <td>The three targeted learnings to enhance model ...</td>\n",
              "      <td>What are the three targeted learnings to enhan...</td>\n",
              "      <td>None</td>\n",
              "      <td>The three targeted learning approaches to enha...</td>\n",
              "      <td>0</td>\n",
              "      <td>0</td>\n",
              "      <td>2.606171</td>\n",
              "      <td>0e661de4-636b-425d-8f6e-0a52b8070576</td>\n",
              "      <td>a3c6714d-8f28-4a82-93b4-d4f260af54ae</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>1</th>\n",
              "      <td>What are the key functions of an agent's orche...</td>\n",
              "      <td>implementation of the agent orchestration laye...</td>\n",
              "      <td>The key functions of an agent's orchestration ...</td>\n",
              "      <td>What are the key functions of an agent's orche...</td>\n",
              "      <td>None</td>\n",
              "      <td>The key functions of an agent's orchestration ...</td>\n",
              "      <td>1</td>\n",
              "      <td>1</td>\n",
              "      <td>4.474181</td>\n",
              "      <td>3561c6fe-6ed4-4182-989a-270dcd635f32</td>\n",
              "      <td>180daa5e-4279-47ac-9150-d19ab5eb94cb</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>2</th>\n",
              "      <td>List up the name of the authors</td>\n",
              "      <td>Agents\\nAuthors: Julia Wiesinger, Patrick Marl...</td>\n",
              "      <td>The authors are Julia Wiesinger, Patrick Marlo...</td>\n",
              "      <td>List up the name of the authors</td>\n",
              "      <td>None</td>\n",
              "      <td>The authors are Julia Wiesinger, Patrick Marlo...</td>\n",
              "      <td>1</td>\n",
              "      <td>1</td>\n",
              "      <td>1.298198</td>\n",
              "      <td>b03e98d1-44ad-4142-8dfa-7b0a31a57096</td>\n",
              "      <td>d8eed689-7e42-4897-936b-b3628ee5632c</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>3</th>\n",
              "      <td>What is Tree-of-thoughts?</td>\n",
              "      <td>weaknesses depending on the specific applicati...</td>\n",
              "      <td>Tree-of-thoughts (ToT) is a prompt engineering...</td>\n",
              "      <td>What is Tree-of-thoughts?</td>\n",
              "      <td>None</td>\n",
              "      <td>Tree-of-thoughts (ToT) is a prompt engineering...</td>\n",
              "      <td>1</td>\n",
              "      <td>1</td>\n",
              "      <td>2.477597</td>\n",
              "      <td>be18ec98-ab18-4f30-9205-e75f1cb70844</td>\n",
              "      <td>ef6126c1-cba0-4cfd-9725-628dc5a861e4</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>4</th>\n",
              "      <td>What is the framework used for reasoning and p...</td>\n",
              "      <td>reasoning frameworks (CoT, ReAct, etc.) to \\nf...</td>\n",
              "      <td>The frameworks used for reasoning and planning...</td>\n",
              "      <td>What is the framework used for reasoning and p...</td>\n",
              "      <td>None</td>\n",
              "      <td>The frameworks used for reasoning and planning...</td>\n",
              "      <td>1</td>\n",
              "      <td>1</td>\n",
              "      <td>2.092742</td>\n",
              "      <td>eb4b29a7-511c-4f78-a08f-2d5afeb84320</td>\n",
              "      <td>ed7c7dba-8102-49e7-ad0b-f118f79d7e6f</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>5</th>\n",
              "      <td>How do agents differ from standalone language ...</td>\n",
              "      <td>1.\\t Agents extend the capabilities of languag...</td>\n",
              "      <td>Agents differ from standalone language models ...</td>\n",
              "      <td>How do agents differ from standalone language ...</td>\n",
              "      <td>None</td>\n",
              "      <td>Agents can use tools to access real-time data ...</td>\n",
              "      <td>1</td>\n",
              "      <td>1</td>\n",
              "      <td>6.000040</td>\n",
              "      <td>f4a5a0cf-2d2e-4e15-838a-bc8296eb708b</td>\n",
              "      <td>9f190340-10c5-4af2-be25-c35bf5b7f29a</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n",
              "</div>"
            ],
            "text/plain": [
              "<ExperimentResults RAG_EVAL-8087cb7d>"
            ]
          },
          "execution_count": 14,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# Set the dataset name\n",
        "dataset_name = \"RAG_EVAL_DATASET\"\n",
        "\n",
        "# Execute evaluation \n",
        "evaluate(\n",
        "    context_answer_rag_answer,\n",
        "    data=dataset_name,\n",
        "    evaluators=[cot_qa_evaluator, context_qa_evaluator],\n",
        "    experiment_prefix=\"RAG_EVAL\",\n",
        "    metadata={\n",
        "        \"variant\": \"Evaluation with COT_QA & Context_QA Evaluator\",\n",
        "    },\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6901760c",
      "metadata": {},
      "source": [
        "![context-based-eval](./assets/05-langsmith-llm-as-judge-02.png)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "7a6dfaeb",
      "metadata": {},
      "source": [
        "Even if the generated answer doesn't match the **Ground Truth**, it will be evaluated as **CORRECT** as long as it is accurate with respect to the given `Context`."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "f7888195",
      "metadata": {},
      "source": [
        "## Criteria\n",
        "\n",
        "When reference labels (correct answers) are unavailable or difficult to obtain, you can use the \"Criteria\" or \"Score\" evaluators to assess runs against a custom set of criteria.\n",
        "\n",
        "This is useful when you want to monitor **high-level semantic aspects** of the model's responses.\n",
        "\n",
        "```python\n",
        "LangChainStringEvaluator(\"criteria\", config={\"criteria\": \"one of the criteria below\"})\n",
        "```"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "ddc4b23e",
      "metadata": {},
      "source": [
        "| Criteria | Description |\n",
        "|------|------|\n",
        "| `conciseness` | Evaluates if the answer is concise and simple |\n",
        "| `relevance` | Evaluates if the answer is relevant to the question |\n",
        "| `correctness` | Evaluates if the answer is correct |\n",
        "| `coherence` | Evaluates if the answer is coherent |\n",
        "| `harmfulness` | Evaluates if the answer is harmful or dangerous |\n",
        "| `maliciousness` | Evaluates if the answer is malicious or aggravating |\n",
        "| `helpfulness` | Evaluates if the answer is helpful |\n",
        "| `controversiality` | Evaluates if the answer is controversial |\n",
        "| `misogyny` | Evaluates if the answer is misogynistic |\n",
        "| `criminality` | Evaluates if the answer promotes criminal behavior |"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 15,
      "id": "13cd3337",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "View the evaluation results for experiment: 'CRITERIA-EVAL-c7ebf8e3' at:\n",
            "https://smith.langchain.com/o/9089d1d3-e786-4000-8468-66153f05444b/datasets/9b4ca107-33fe-4c71-bb7f-488272d895a3/compare?selectedSessions=0a18e29b-60fe-4427-a51f-c52299a18898\n",
            "\n",
            "\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "501ec7748f5446c98fd197d521b5e083",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "0it [00:00, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from langsmith.evaluation import evaluate, LangChainStringEvaluator\n",
        "\n",
        "# Set Evaluator\n",
        "criteria_evaluator = [\n",
        "    LangChainStringEvaluator(\"criteria\", config={\"criteria\": \"conciseness\"}),\n",
        "    LangChainStringEvaluator(\"criteria\", config={\"criteria\": \"misogyny\"}),\n",
        "    LangChainStringEvaluator(\"criteria\", config={\"criteria\": \"criminality\"}),\n",
        "]\n",
        "\n",
        "# Set the name of Dataset\n",
        "dataset_name = \"RAG_EVAL_DATASET\"\n",
        "\n",
        "# Execute Evaluation\n",
        "experiment_results = evaluate(\n",
        "    ask_question,\n",
        "    data=dataset_name,\n",
        "    evaluators=criteria_evaluator,\n",
        "    experiment_prefix=\"CRITERIA-EVAL\",\n",
        "    # Specify experiment metadata\n",
        "    metadata={\n",
        "        \"variant\": \"Evaluation with Criteria Evaluator\",\n",
        "    },\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "baf477db",
      "metadata": {},
      "source": [
        "![criteria-eval](./assets/05-langsmith-llm-as-judge-03.png)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "5c3026b9",
      "metadata": {},
      "source": [
        "## Use of Evaluator when correct answers exist(labeled_criteria)\n",
        "\n",
        "When correct answers exist, it's possible to evaluate by comparing the LLM-generated answer with the correct answer.\n",
        "As shown in the example below, pass the correct answer to `reference` and the LLM-generated answer to `prediction`.\n",
        "Such settings are defined through `prepare_data`.\n",
        "Additionally, the LLM used for answer evaluation is defined through llm in the config.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 16,
      "id": "4b9f050c",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "You are assessing a submitted answer on a given task or input based on a set of criteria. Here is the data:\n",
            "[BEGIN DATA]\n",
            "***\n",
            "[Input]: \u001b[33;1m\u001b[1;3m{input}\u001b[0m\n",
            "***\n",
            "[Submission]: \u001b[33;1m\u001b[1;3m{output}\u001b[0m\n",
            "***\n",
            "[Criteria]: helpfulness: Is this submission helpful to the user, taking into account the correct reference answer?\n",
            "***\n",
            "[Reference]: \u001b[33;1m\u001b[1;3m{reference}\u001b[0m\n",
            "***\n",
            "[END DATA]\n",
            "Does the submission meet the Criteria? First, write out in a step by step manner your reasoning about each criterion to be sure that your conclusion is correct. Avoid simply stating the correct answers at the outset. Then print only the single character \"Y\" or \"N\" (without quotes or punctuation) on its own line corresponding to the correct answer of whether the submission meets all criteria. At the end, repeat just the letter again by itself on a new line.\n"
          ]
        }
      ],
      "source": [
        "from langsmith.evaluation import LangChainStringEvaluator\n",
        "from langchain_openai import ChatOpenAI\n",
        "\n",
        "# Create labeled_criteria Evaluator\n",
        "labeled_criteria_evaluator = LangChainStringEvaluator(\n",
        "    \"labeled_criteria\",\n",
        "    config={\n",
        "        \"criteria\": {\n",
        "            \"helpfulness\": (\n",
        "                \"Is this submission helpful to the user,\"\n",
        "                \" taking into account the correct reference answer?\"\n",
        "            )\n",
        "        },\n",
        "        \"llm\": ChatOpenAI(temperature=0.0, model=\"gpt-4o-mini\"),\n",
        "    },\n",
        "    prepare_data=lambda run, example: {\n",
        "        \"prediction\": run.outputs[\"answer\"],\n",
        "        \"reference\": example.outputs[\"answer\"],  # Correct answer\n",
        "        \"input\": example.inputs[\"question\"],\n",
        "    },\n",
        ")\n",
        "\n",
        "# Print evaluator prompt\n",
        "print_evaluator_prompt(labeled_criteria_evaluator)\n"
      ]
    },
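    {
      "cell_type": "markdown",
      "id": "7e3a5f21",
      "metadata": {},
      "source": [
        "As the prompt above shows, the judge model is instructed to reason step by step and then print a single `Y` or `N` on its own line. A minimal sketch of how such a verdict could be extracted from raw judge output (plain Python; an illustration only, not LangSmith's actual parser):\n",
        "\n",
        "```python\n",
        "def parse_verdict(judge_output: str) -> bool:\n",
        "    # Scan lines from the end: the final standalone \"Y\" or \"N\" is the verdict\n",
        "    for line in reversed(judge_output.strip().splitlines()):\n",
        "        token = line.strip()\n",
        "        if token in (\"Y\", \"N\"):\n",
        "            return token == \"Y\"\n",
        "    raise ValueError(\"No Y/N verdict found in judge output\")\n",
        "\n",
        "print(parse_verdict(\"The answer is helpful and matches the reference.\\nY\\nY\"))  # True\n",
        "```"
      ]
    },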
    {
      "cell_type": "markdown",
      "id": "933a4566",
      "metadata": {},
      "source": [
        "Here's the example of evaluating `relevance`.\n",
        "This time, we pass the `context` as the `reference` through `prepare_data` ."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "id": "b3c2584d",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "You are assessing a submitted answer on a given task or input based on a set of criteria. Here is the data:\n",
            "[BEGIN DATA]\n",
            "***\n",
            "[Input]: \u001b[33;1m\u001b[1;3m{input}\u001b[0m\n",
            "***\n",
            "[Submission]: \u001b[33;1m\u001b[1;3m{output}\u001b[0m\n",
            "***\n",
            "[Criteria]: relevance: Is the submission referring to a real quote from the text?\n",
            "***\n",
            "[Reference]: \u001b[33;1m\u001b[1;3m{reference}\u001b[0m\n",
            "***\n",
            "[END DATA]\n",
            "Does the submission meet the Criteria? First, write out in a step by step manner your reasoning about each criterion to be sure that your conclusion is correct. Avoid simply stating the correct answers at the outset. Then print only the single character \"Y\" or \"N\" (without quotes or punctuation) on its own line corresponding to the correct answer of whether the submission meets all criteria. At the end, repeat just the letter again by itself on a new line.\n"
          ]
        }
      ],
      "source": [
        "from langchain_openai import ChatOpenAI\n",
        "\n",
        "relevance_evaluator = LangChainStringEvaluator(\n",
        "    \"labeled_criteria\",\n",
        "    config={\n",
        "        \"criteria\": \"relevance\",\n",
        "        \"llm\": ChatOpenAI(temperature=0.0, model=\"gpt-4o-mini\"),\n",
        "    },\n",
        "    prepare_data=lambda run, example: {\n",
        "        \"prediction\": run.outputs[\"answer\"],\n",
        "        \"reference\": run.outputs[\"context\"],  # Convey the context\n",
        "        \"input\": example.inputs[\"question\"],\n",
        "    },\n",
        ")\n",
        "\n",
        "print_evaluator_prompt(relevance_evaluator)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "dfaef0d2",
      "metadata": {},
      "source": [
        "Execute the Evaluation, and Check the result that returned."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "id": "2b32ac20",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "View the evaluation results for experiment: 'LABELED-EVAL-80ee678c' at:\n",
            "https://smith.langchain.com/o/9089d1d3-e786-4000-8468-66153f05444b/datasets/9b4ca107-33fe-4c71-bb7f-488272d895a3/compare?selectedSessions=36dc1710-9c3a-46ce-b1ab-a209bb8b700d\n",
            "\n",
            "\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "dbdde405f2e6474aa5c5c89fcf8e69c4",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "0it [00:00, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from langsmith.evaluation import evaluate\n",
        "\n",
        "# Set the name of Dataset\n",
        "dataset_name = \"RAG_EVAL_DATASET\"\n",
        "\n",
        "# Execute Evaluation\n",
        "experiment_results = evaluate(\n",
        "    context_answer_rag_answer,\n",
        "    data=dataset_name,\n",
        "    evaluators=[labeled_criteria_evaluator, relevance_evaluator],\n",
        "    experiment_prefix=\"LABELED-EVAL\",\n",
        "    # Specify experiment metadata\n",
        "    metadata={\n",
        "        \"variant\": \"Evaluation with Labeled_criteria Evaluator\",\n",
        "    },\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "d38ebb6d",
      "metadata": {},
      "source": [
        "![labeled-eval](./assets/05-langsmith-llm-as-judge-04.png)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "d61e61e1",
      "metadata": {},
      "source": [
        "## Custom function Evaluator\n",
        "\n",
        "Here's an example of creating an evaluator that returns scores. You can normalize scores through `normalize_by`. The converted scores are normalized to values between (0 ~ 1).\n",
        "The `accuracy` below is a user-defined criterion. You can define and use appropriate prompts for your needs."
      ]
    },
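    {
      "cell_type": "markdown",
      "id": "f2a91c44",
      "metadata": {},
      "source": [
        "`normalize_by` divides the judge's raw rating by the given value. A quick sketch of the arithmetic (plain Python, assuming a 1-10 rating scale):\n",
        "\n",
        "```python\n",
        "# Hypothetical illustration of what normalize_by=10 does to a raw rating\n",
        "raw_rating = 8           # e.g. extracted from the judge's \"Rating: [[8]]\" output\n",
        "normalize_by = 10        # value passed in the evaluator config\n",
        "normalized_score = raw_rating / normalize_by\n",
        "print(normalized_score)  # 0.8\n",
        "```"
      ]
    },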
    {
      "cell_type": "code",
      "execution_count": 19,
      "id": "366aeaf3",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "================================\u001b[1m System Message \u001b[0m================================\n",
            "\n",
            "You are a helpful assistant.\n",
            "\n",
            "================================\u001b[1m Human Message \u001b[0m=================================\n",
            "\n",
            "[Instruction]\n",
            "Please act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user question displayed below. \u001b[33;1m\u001b[1;3m{criteria}\u001b[0m[Ground truth]\n",
            "\u001b[33;1m\u001b[1;3m{reference}\u001b[0m\n",
            "Begin your evaluation by providing a short explanation. Be as objective as possible. After providing your explanation, you must rate the response on a scale of 1 to 10 by strictly following this format: \"[[rating]]\", for example: \"Rating: [[5]]\".\n",
            "\n",
            "[Question]\n",
            "\u001b[33;1m\u001b[1;3m{input}\u001b[0m\n",
            "\n",
            "[The Start of Assistant's Answer]\n",
            "\u001b[33;1m\u001b[1;3m{prediction}\u001b[0m\n",
            "[The End of Assistant's Answer]\n"
          ]
        }
      ],
      "source": [
        "from langsmith.evaluation import LangChainStringEvaluator\n",
        "\n",
        "# Create labeled score evaluator\n",
        "labeled_score_evaluator = LangChainStringEvaluator(\n",
        "    \"labeled_score_string\",\n",
        "    config={\n",
        "        \"criteria\": {\n",
        "            \"accuracy\": \"How accurate is this prediction compared to the reference on a scale of 1-10?\"\n",
        "        },\n",
        "        \"normalize_by\": 10,\n",
        "        \"llm\": ChatOpenAI(temperature=0.0, model=\"gpt-4o-mini\"),\n",
        "    },\n",
        "    prepare_data=lambda run, example: {\n",
        "        \"prediction\": run.outputs[\"answer\"],\n",
        "        \"reference\": example.outputs[\"answer\"],\n",
        "        \"input\": example.inputs[\"question\"],\n",
        "    },\n",
        ")\n",
        "\n",
        "print_evaluator_prompt(labeled_score_evaluator)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "fca738ae",
      "metadata": {},
      "source": [
        "Execute the Evaluation, and Check the result that returned."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 20,
      "id": "91a5d982",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "View the evaluation results for experiment: 'LABELED-SCORE-EVAL-ca73be6c' at:\n",
            "https://smith.langchain.com/o/9089d1d3-e786-4000-8468-66153f05444b/datasets/9b4ca107-33fe-4c71-bb7f-488272d895a3/compare?selectedSessions=a846a6af-3409-4907-9d90-849e0532533f\n",
            "\n",
            "\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "b501242c01fe4ca4a3a29b1d0a4efcb0",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "0it [00:00, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from langsmith.evaluation import evaluate\n",
        "\n",
        "# Execute evaluatoin\n",
        "experiment_results = evaluate(\n",
        "    ask_question,\n",
        "    data=dataset_name,\n",
        "    evaluators=[labeled_score_evaluator],\n",
        "    experiment_prefix=\"LABELED-SCORE-EVAL\",\n",
        "    # Specify experiment metadata\n",
        "    metadata={\n",
        "        \"variant\": \"Evaluation with Labeled_score Evaluator\",\n",
        "    },\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "b4089fa8",
      "metadata": {},
      "source": [
        "![labeled-score-eval](./assets/05-langsmith-llm-as-judge-05.png)"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "langchain-tutorial",
      "language": "python",
      "name": "langchain_tutorial"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.11.9"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}
