{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2da95378",
   "metadata": {},
   "source": [
    "# String Distance\n",
    "\n",
    "One of the simplest ways to compare an LLM or chain's string output against a reference label is by using string distance measurements such as Levenshtein or postfix distance.  This can be used alongside approximate/fuzzy matching criteria for very basic unit testing.\n",
    "\n",
    "This can be accessed using the `string_distance` evaluator, which uses distance metric's from the [rapidfuzz](https://github.com/maxbachmann/RapidFuzz) library.\n",
    "\n",
    "**Note:** The returned scores are _distances_, meaning lower is typically \"better\".\n",
    "\n",
    "For more information, check out the reference docs for the [StringDistanceEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.StringDistanceEvalChain.html#langchain.evaluation.string_distance.base.StringDistanceEvalChain) for more info."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "8b47b909-3251-4774-9a7d-e436da4f8979",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# %pip install rapidfuzz"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "f6790c46",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.evaluation import load_evaluator\n",
    "\n",
    "evaluator = load_evaluator(\"string_distance\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "49ad9139",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'score': 0.11555555555555552}"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "evaluator.evaluate_strings(\n",
    "    prediction=\"The job is completely done.\",\n",
    "    reference=\"The job is done\",\n",
    ")"
   ]
  },
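  {
   "cell_type": "markdown",
   "id": "1c2d3e4f",
   "metadata": {},
   "source": [
    "The score itself comes from `rapidfuzz`. As a minimal sketch of the kind of metric involved, you can call `rapidfuzz` directly to compute a normalized Levenshtein distance, where 0.0 means the strings are identical. This is an illustration only; the chain's exact normalization may differ, so the value below won't necessarily match the score above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2d3e4f5a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustration only: a normalized Levenshtein distance computed with rapidfuzz directly,\n",
    "# not necessarily the chain's exact computation.\n",
    "from rapidfuzz.distance import Levenshtein\n",
    "\n",
    "# normalized_distance returns a value in [0, 1], where 0.0 means identical strings\n",
    "Levenshtein.normalized_distance(\"The job is completely done.\", \"The job is done\")"
   ]
  },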
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "c06a2296",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'score': 0.0724999999999999}"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# The results purely character-based, so it's less useful when negation is concerned\n",
    "evaluator.evaluate_strings(\n",
    "    prediction=\"The job is done.\",\n",
    "    reference=\"The job isn't done\",\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b8ed1f12-09a6-4e90-a69d-c8df525ff293",
   "metadata": {},
   "source": [
    "## Configure the String Distance Metric\n",
    "\n",
    "By default, the `StringDistanceEvalChain` uses  levenshtein distance, but it also supports other string distance algorithms. Configure using the `distance` argument."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "a88bc7d7-62d3-408d-b0e0-43abcecf35c8",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[<StringDistance.DAMERAU_LEVENSHTEIN: 'damerau_levenshtein'>,\n",
       " <StringDistance.LEVENSHTEIN: 'levenshtein'>,\n",
       " <StringDistance.JARO: 'jaro'>,\n",
       " <StringDistance.JARO_WINKLER: 'jaro_winkler'>]"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.evaluation import StringDistance\n",
    "\n",
    "list(StringDistance)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "0c079864-0175-4d06-9d3f-a0e51dd3977c",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "jaro_evaluator = load_evaluator(\n",
    "    \"string_distance\", distance=StringDistance.JARO\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "a8dfb900-14f3-4a1f-8736-dd1d86a1264c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'score': 0.19259259259259254}"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "jaro_evaluator.evaluate_strings(\n",
    "    prediction=\"The job is completely done.\",\n",
    "    reference=\"The job is done\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "7020b046-0ef7-40cc-8778-b928e35f3ce1",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'score': 0.12083333333333324}"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "jaro_evaluator.evaluate_strings(\n",
    "    prediction=\"The job is done.\",\n",
    "    reference=\"The job isn't done\",\n",
    ")"
   ]
  }
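,
  {
   "cell_type": "markdown",
   "id": "3e4f5a6b",
   "metadata": {},
   "source": [
    "Jaro-Winkler additionally rewards a shared prefix between the two strings, so predictions that begin the same way as the reference receive a lower (closer) distance. As a minimal sketch, it can be loaded the same way as the Jaro evaluator above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4f5a6b7c",
   "metadata": {},
   "outputs": [],
   "source": [
    "jaro_winkler_evaluator = load_evaluator(\n",
    "    \"string_distance\", distance=StringDistance.JARO_WINKLER\n",
    ")\n",
    "jaro_winkler_evaluator.evaluate_strings(\n",
    "    prediction=\"The job is completely done.\",\n",
    "    reference=\"The job is done\",\n",
    ")"
   ]
  }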
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
