{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "authorship_tag": "ABX9TyMWORqfJY8aBvCdSDqE4xwH",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/sugarforever/LangChain-Tutorials/blob/main/LangChain_Caching.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Understanding LangChain Caching\n",
        "\n",
        "In this notebook, we will see:\n",
        "1. How the LangChain framework uses caching to improve the efficiency of LLM interactions.\n",
        "2. How caching behaves with two different underlying storages: in-memory and SQLite.\n",
        "\n",
        "Hopefully this will help you understand whether and when you should use a cache."
      ],
      "metadata": {
        "id": "oA823Xm-LG0c"
      }
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "VXeDv_iNy4Yt"
      },
      "outputs": [],
      "source": [
        "!pip install langchain openai --quiet --upgrade"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "os.environ['OPENAI_API_KEY'] = 'your openai api key'"
      ],
      "metadata": {
        "id": "EOakFOzKzApQ"
      },
      "execution_count": 2,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Get your ChatOpenAI instance ready"
      ],
      "metadata": {
        "id": "GFr7TU4yMINW"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import langchain\n",
        "from langchain.chat_models import ChatOpenAI\n",
        "\n",
        "llm = ChatOpenAI()"
      ],
      "metadata": {
        "id": "9dBrwRdizKI4"
      },
      "execution_count": 3,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 1. In-Memory Cache"
      ],
      "metadata": {
        "id": "U9LI6MsOMYDz"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from langchain.cache import InMemoryCache\n",
        "langchain.llm_cache = InMemoryCache()"
      ],
      "metadata": {
        "id": "pZkMY2T1zRJ4"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Ask a question and measure how long it takes the LLM to respond."
      ],
      "metadata": {
        "id": "A_0EEZ6BMgHf"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm.predict(\"What is OpenAI?\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 128
        },
        "id": "EOxV7tZwz0Jz",
        "outputId": "9bb819c6-78f1-46e2-c7df-8ebcd1cc6f6a"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 25 ms, sys: 6.4 ms, total: 31.4 ms\n",
            "Wall time: 4.54 s\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "\"OpenAI is an artificial intelligence research laboratory and company that aims to ensure that artificial general intelligence (AGI) benefits all of humanity. It was founded in December 2015 as a non-profit organization but later transformed into a for-profit company called OpenAI LP in 2019. OpenAI conducts research in various fields of AI, develops cutting-edge technologies, and publishes most of its AI research findings. The organization's mission is to ensure that AGI is developed safely, is aligned with human values, and is used for the benefit of all individuals.\""
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 45
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### How the cache stores data\n",
        "\n",
        "**source code**: [cache.py](https://github.com/hwchase17/langchain/blob/v0.0.219/langchain/cache.py#L102)\n",
        "```python\n",
        "class InMemoryCache(BaseCache):\n",
        "    \"\"\"Cache that stores things in memory.\"\"\"\n",
        "\n",
        "    def __init__(self) -> None:\n",
        "        \"\"\"Initialize with empty cache.\"\"\"\n",
        "        self._cache: Dict[Tuple[str, str], RETURN_VAL_TYPE] = {}\n",
        "```\n",
        "\n",
        "As shown above, InMemoryCache is simply a dictionary keyed by a `(prompt, llm_string)` tuple."
      ],
      "metadata": {
        "id": "wNVz-A70OB65"
      }
    },
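    {
      "cell_type": "markdown",
      "source": [
        "As a rough sketch (illustrative names, not LangChain's actual code), the in-memory cache behaves like an exact-match dictionary keyed by the `(prompt, llm_string)` tuple:\n",
        "\n",
        "```python\n",
        "# Minimal sketch of an exact-match in-memory LLM cache.\n",
        "class SimpleInMemoryCache:\n",
        "    def __init__(self):\n",
        "        self._cache = {}  # (prompt, llm_string) -> cached response\n",
        "\n",
        "    def lookup(self, prompt, llm_string):\n",
        "        # Any change to the prompt string, even whitespace,\n",
        "        # produces a different key and therefore a miss (None).\n",
        "        return self._cache.get((prompt, llm_string))\n",
        "\n",
        "    def update(self, prompt, llm_string, return_val):\n",
        "        self._cache[(prompt, llm_string)] = return_val\n",
        "```\n",
        "\n",
        "Because the key includes the serialized LLM configuration, the same prompt sent to a differently configured model would also be a cache miss."
      ],
      "metadata": {}
    },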
    {
      "cell_type": "code",
      "source": [
        "# First element of the key tuple: the serialized prompt messages\n",
        "list(langchain.llm_cache._cache.keys())[0][0]"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 36
        },
        "id": "795SesMkKsx1",
        "outputId": "6602dee3-af40-4211-9ea6-c957c565babb"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'[{\"lc\": 1, \"type\": \"constructor\", \"id\": [\"langchain\", \"schema\", \"HumanMessage\"], \"kwargs\": {\"content\": \"What is OpenAI?\"}}]'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 57
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# Second element of the key tuple: the serialized LLM configuration (plus stop words)\n",
        "list(langchain.llm_cache._cache.keys())[0][1]"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 54
        },
        "id": "cKoCMgV1J0Qy",
        "outputId": "a3ae41b4-2983-458f-ee38-9d5390df660a"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'{\"lc\": 1, \"type\": \"constructor\", \"id\": [\"langchain\", \"chat_models\", \"openai\", \"ChatOpenAI\"], \"kwargs\": {\"openai_api_key\": {\"lc\": 1, \"type\": \"secret\", \"id\": [\"OPENAI_API_KEY\"]}}}---[(\\'stop\\', None)]'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 58
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Ask the same question again and observe the quicker response."
      ],
      "metadata": {
        "id": "s0HhgOmWMomV"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm.predict(\"What is OpenAI?\")"
      ],
      "metadata": {
        "id": "GDykW5yiMw-b"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 2. SQLite as Cache"
      ],
      "metadata": {
        "id": "vYonfxGfMx3t"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Remove any stale cache database left over from previous runs\n",
        "!rm -f .cache.db"
      ],
      "metadata": {
        "id": "EP3QRaPy0mp1"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "from langchain.cache import SQLiteCache\n",
        "langchain.llm_cache = SQLiteCache(database_path=\".cache.db\")"
      ],
      "metadata": {
        "id": "yRFlThqU0tfU"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Ask the same question twice and measure the performance difference"
      ],
      "metadata": {
        "id": "2j1dHYmGM5WK"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm.predict(\"What is OpenAI?\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 128
        },
        "id": "WsfcYsU40yFR",
        "outputId": "e0128db0-d992-4037-c91d-b63847cf905b"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 4.25 ms, sys: 980 µs, total: 5.23 ms\n",
            "Wall time: 4.97 ms\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'OpenAI is an artificial intelligence research laboratory and company. It was founded in December 2015 with the goal of developing and promoting friendly AI for the benefit of all humanity. OpenAI conducts cutting-edge research in various areas of AI and aims to ensure that artificial general intelligence (AGI) benefits everyone and is used responsibly. They work on advancing AI technology, publishing most of their AI research, and providing public goods to help society navigate the path to AGI. OpenAI also develops and deploys AI models and systems, such as the language model GPT-3, to showcase the capabilities and potential applications of AI.'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 19
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm.predict(\"What is OpenAI?\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 128
        },
        "id": "m_HFoa-Z052V",
        "outputId": "728a714a-83d4-42fc-d289-c376915c0152"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 39.3 ms, sys: 9.16 ms, total: 48.5 ms\n",
            "Wall time: 4.84 s\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'OpenAI is an artificial intelligence research lab and company founded in December 2015. Its mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. OpenAI conducts research to develop safe and beneficial AI technologies and also aims to promote the widespread adoption of such technologies for societal benefit. The organization has made significant contributions to the field of AI, particularly in areas such as natural language processing, reinforcement learning, and robotics. OpenAI also develops and maintains various open-source AI tools and frameworks to facilitate the development and deployment of AI applications.'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 42
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Add an extra space to the sentence and ask again"
      ],
      "metadata": {
        "id": "hHIjReJUM_gD"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm.predict(\"What is  OpenAI?\")"
      ],
      "metadata": {
        "id": "hdD1CpzSNFzo"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Inspect the cache database directly with SQLAlchemy\n",
        "from sqlalchemy import create_engine\n",
        "\n",
        "engine = create_engine(\"sqlite:///.cache.db\")"
      ],
      "metadata": {
        "id": "TvI7KFBLGfTn"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### **Why does the extra space cause a cache miss?**"
      ],
      "metadata": {
        "id": "Y4LAKj76NJiZ"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### How SQLite stores cache data\n",
        "\n",
        "**source code**: [cache.py](https://github.com/hwchase17/langchain/blob/v0.0.219/langchain/cache.py#L128)\n",
        "```python\n",
        "class FullLLMCache(Base):  # type: ignore\n",
        "    \"\"\"SQLite table for full LLM Cache (all generations).\"\"\"\n",
        "\n",
        "    __tablename__ = \"full_llm_cache\"\n",
        "    prompt = Column(String, primary_key=True)\n",
        "    llm = Column(String, primary_key=True)\n",
        "    idx = Column(Integer, primary_key=True)\n",
        "    response = Column(String)\n",
        "\n",
        "\n",
        "class SQLAlchemyCache(BaseCache):\n",
        "    \"\"\"Cache that uses SQAlchemy as a backend.\"\"\"\n",
        "\n",
        "    def __init__(self, engine: Engine, cache_schema: Type[FullLLMCache] = FullLLMCache):\n",
        "        \"\"\"Initialize by creating all tables.\"\"\"\n",
        "        self.engine = engine\n",
        "        self.cache_schema = cache_schema\n",
        "        self.cache_schema.metadata.create_all(self.engine)\n",
        "```\n",
        "\n",
        "This is the schema of the cache table `full_llm_cache`. Note that `prompt` is part of the primary key, so lookups require an exact string match."
      ],
      "metadata": {
        "id": "hlKSAcOOOSqA"
      }
    },
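    {
      "cell_type": "markdown",
      "source": [
        "Because the serialized prompt is a primary-key column, a lookup is an exact string comparison against `prompt`. A hedged sketch of the equivalent query (illustrative, not LangChain's actual code):\n",
        "\n",
        "```python\n",
        "# Sketch: a SQLite-backed cache hit is an exact match on the prompt string.\n",
        "import sqlite3\n",
        "\n",
        "def sqlite_cache_lookup(db_path, prompt, llm_string):\n",
        "    with sqlite3.connect(db_path) as conn:\n",
        "        rows = conn.execute(\n",
        "            \"SELECT response FROM full_llm_cache \"\n",
        "            \"WHERE prompt = ? AND llm = ? ORDER BY idx\",\n",
        "            (prompt, llm_string),\n",
        "        ).fetchall()\n",
        "    return rows or None\n",
        "```\n",
        "\n",
        "`'What is OpenAI?'` and `'What is  OpenAI?'` serialize to different prompt strings, so the second query finds no row: a cache miss."
      ],
      "metadata": {}
    },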
    {
      "cell_type": "code",
      "source": [
        "with engine.connect() as connection:\n",
        "\n",
        "    rs = connection.exec_driver_sql('select * from full_llm_cache')\n",
        "    print(rs.keys())\n",
        "    for row in rs:\n",
        "        print(row)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "RK8cQdbkGrNk",
        "outputId": "4c986b1d-1dfb-49d8-caf3-66e9f03f62a0"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "RMKeyView(['prompt', 'llm', 'idx', 'response'])\n",
            "('[{\"lc\": 1, \"type\": \"constructor\", \"id\": [\"langchain\", \"schema\", \"HumanMessage\"], \"kwargs\": {\"content\": \"What is OpenAI?\"}}]', '{\"lc\": 1, \"type\": \"constructor\", \"id\": [\"langchain\", \"chat_models\", \"openai\", \"ChatOpenAI\"], \"kwargs\": {\"openai_api_key\": {\"lc\": 1, \"type\": \"secret\", \"id\": [\"OPENAI_API_KEY\"]}}}---[(\\'stop\\', None)]', 0, '{\"lc\": 1, \"type\": \"constructor\", \"id\": [\"langchain\", \"schema\", \"ChatGeneration\"], \"kwargs\": {\"message\": {\"lc\": 1, \"type\": \"constructor\", \"id\": [\"lang ... (588 characters truncated) ... AI models and systems, such as the language model GPT-3, to showcase the capabilities and potential applications of AI.\", \"additional_kwargs\": {}}}}}')\n",
            "('[{\"lc\": 1, \"type\": \"constructor\", \"id\": [\"langchain\", \"schema\", \"HumanMessage\"], \"kwargs\": {\"content\": \"What is  OpenAI?\"}}]', '{\"lc\": 1, \"type\": \"constructor\", \"id\": [\"langchain\", \"chat_models\", \"openai\", \"ChatOpenAI\"], \"kwargs\": {\"openai_api_key\": {\"lc\": 1, \"type\": \"secret\", \"id\": [\"OPENAI_API_KEY\"]}}}---[(\\'stop\\', None)]', 0, '{\"lc\": 1, \"type\": \"constructor\", \"id\": [\"langchain\", \"schema\", \"ChatGeneration\"], \"kwargs\": {\"message\": {\"lc\": 1, \"type\": \"constructor\", \"id\": [\"lang ... (594 characters truncated) ...  maintains various open-source AI tools and frameworks to facilitate the development and deployment of AI applications.\", \"additional_kwargs\": {}}}}}')\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Semantic Cache\n",
        "\n",
        "A semantic cache stores prompts and responses, and evaluates hits based on semantic similarity rather than exact string matching."
      ],
      "metadata": {
        "id": "a46Ty0dmfhRH"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install langchain openai --quiet --upgrade"
      ],
      "metadata": {
        "id": "O5tV0sRSqIUT"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "os.environ['OPENAI_API_KEY'] = 'your openai api key'"
      ],
      "metadata": {
        "id": "G3IOojuZqcUy"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Follow the [Redis official doc](https://redis.com/blog/running-redis-on-google-colab/) to install and start a Redis server on Google Colab."
      ],
      "metadata": {
        "id": "EwC6ItJngOoo"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!curl -fsSL https://packages.redis.io/redis-stack/redis-stack-server-6.2.6-v7.focal.x86_64.tar.gz -o redis-stack-server.tar.gz\n",
        "!tar -xvf redis-stack-server.tar.gz\n",
        "!pip install redis\n",
        "\n",
        "!./redis-stack-server-6.2.6-v7/bin/redis-stack-server --daemonize yes"
      ],
      "metadata": {
        "id": "AHH23I9ngMGy",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "ce6c1022-0d52-4a7d-d13f-56cdc7cf287a"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "./\n",
            "./redis-stack-server-6.2.6-v7/\n",
            "./redis-stack-server-6.2.6-v7/bin/\n",
            "./redis-stack-server-6.2.6-v7/bin/redis-benchmark\n",
            "./redis-stack-server-6.2.6-v7/bin/redis-cli\n",
            "./redis-stack-server-6.2.6-v7/bin/redis-sentinel\n",
            "./redis-stack-server-6.2.6-v7/bin/redis-stack-server\n",
            "./redis-stack-server-6.2.6-v7/bin/redis-check-rdb\n",
            "./redis-stack-server-6.2.6-v7/bin/redis-check-aof\n",
            "./redis-stack-server-6.2.6-v7/bin/redis-server\n",
            "./redis-stack-server-6.2.6-v7/share/\n",
            "./redis-stack-server-6.2.6-v7/share/RSAL_LICENSE\n",
            "./redis-stack-server-6.2.6-v7/share/APACHE_LICENSE\n",
            "./redis-stack-server-6.2.6-v7/lib/\n",
            "./redis-stack-server-6.2.6-v7/lib/redisgraph.so\n",
            "./redis-stack-server-6.2.6-v7/lib/redistimeseries.so\n",
            "./redis-stack-server-6.2.6-v7/lib/rejson.so\n",
            "./redis-stack-server-6.2.6-v7/lib/redisbloom.so\n",
            "./redis-stack-server-6.2.6-v7/lib/redisearch.so\n",
            "./redis-stack-server-6.2.6-v7/etc/\n",
            "./redis-stack-server-6.2.6-v7/etc/README\n",
            "./redis-stack-server-6.2.6-v7/etc/redis-stack.conf\n",
            "./redis-stack-server-6.2.6-v7/etc/redis-stack-service.conf\n",
            "Collecting redis\n",
            "  Downloading redis-4.6.0-py3-none-any.whl (241 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m241.1/241.1 kB\u001b[0m \u001b[31m5.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: async-timeout>=4.0.2 in /usr/local/lib/python3.10/dist-packages (from redis) (4.0.2)\n",
            "Installing collected packages: redis\n",
            "Successfully installed redis-4.6.0\n",
            "Starting redis-stack-server, database path ./redis-stack-server-6.2.6-v7/var/db/redis-stack\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "import langchain\n",
        "from langchain.llms import OpenAI\n",
        "\n",
        "# To make the caching really obvious, let's use a slower model.\n",
        "llm = OpenAI(model_name=\"text-davinci-002\", n=2, best_of=2)"
      ],
      "metadata": {
        "id": "flJ_q0ymfyEb"
      },
      "execution_count": 12,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Initialize the Redis semantic cache with the default score threshold of 0.2"
      ],
      "metadata": {
        "id": "m3GDztDVpUGQ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from langchain.embeddings import OpenAIEmbeddings\n",
        "from langchain.cache import RedisSemanticCache\n",
        "\n",
        "\n",
        "langchain.llm_cache = RedisSemanticCache(redis_url=\"redis://localhost:6379\", embedding=OpenAIEmbeddings(), score_threshold=0.2)"
      ],
      "metadata": {
        "id": "YylPXZ2dgiHc"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(\"Please translate 'this is Monday' into Chinese\")"
      ],
      "metadata": {
        "id": "r3c0garCgp-9",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 73
        },
        "outputId": "39c63d6f-dc8a-4e65-88c0-07052f6e9130"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 74.4 ms, sys: 7.11 ms, total: 81.5 ms\n",
            "Wall time: 2.19 s\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\n这是周一'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 9
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "Notice that the query below differs from the previous one by a single word, yet the cache still registers a hit because the two prompts are semantically similar."
      ],
      "metadata": {
        "id": "dIu-A7Wxn0sT"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(\"Please translate 'this is Tuesday' into Chinese\")"
      ],
      "metadata": {
        "id": "bm_QBd4gnw_w",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 73
        },
        "outputId": "a8a4835d-e0c1-449b-f64b-d1372ce524da"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 6.35 ms, sys: 0 ns, total: 6.35 ms\n",
            "Wall time: 211 ms\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\n这是周一'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 10
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(\"Tell me a joke\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 73
        },
        "id": "KB27Wo5ihC-C",
        "outputId": "50487775-f2a4-4f65-8d65-ecc33db0c4b0"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 34.2 ms, sys: 2.85 ms, total: 37 ms\n",
            "Wall time: 3.88 s\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side.'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 11
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(\"Tell me 2 jokes\")"
      ],
      "metadata": {
        "id": "Xbp5si6tpb7E",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 73
        },
        "outputId": "d02dfad8-9d7d-4672-f25e-b5c2f3938548"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 7.27 ms, sys: 0 ns, total: 7.27 ms\n",
            "Wall time: 247 ms\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side.'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 12
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Re-initialize the Redis semantic cache with a stricter score threshold of 0.05"
      ],
      "metadata": {
        "id": "lvggtYaDpiV1"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "langchain.llm_cache = RedisSemanticCache(redis_url=\"redis://localhost:6379\", embedding=OpenAIEmbeddings(), score_threshold=0.05)"
      ],
      "metadata": {
        "id": "zIjlyavcpk9F"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(\"Give me a peach\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 91
        },
        "id": "ybsRNjYhhIs5",
        "outputId": "9f40ba16-85dc-4bec-d6b6-344495160a4f"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 22.8 ms, sys: 61 µs, total: 22.9 ms\n",
            "Wall time: 1.49 s\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\nA peach is a smooth, round fruit with a soft, velvety skin. The flesh is sweet and juicy, with a hint of acidity. Peaches are a good source of vitamins A and C, as well as fiber.'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 31
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(\"Give me 2 peaches\")"
      ],
      "metadata": {
        "id": "zfq2rErdk5zZ",
        "outputId": "6bf6dfdb-f6e8-4772-d7f3-289e6aedd96a",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 73
        }
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 16.9 ms, sys: 0 ns, total: 16.9 ms\n",
            "Wall time: 939 ms\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "\"\\n\\nI can't give you anything because this is not a real request.\""
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 32
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Deep dive into Redis semantic cache"
      ],
      "metadata": {
        "id": "p25RH2EioFmv"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Find the keys in the cache"
      ],
      "metadata": {
        "id": "nimL_nl7oK1s"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "langchain.llm_cache._cache_dict"
      ],
      "metadata": {
        "id": "l4eGNhNIng9K",
        "outputId": "42c7433e-e993-45a8-feef-44a622b925d3",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "{'cache:bf6f6d9ebdf492e28cb8bf4878a4b951': <langchain.vectorstores.redis.Redis at 0x7fed7bd13310>}"
            ]
          },
          "metadata": {},
          "execution_count": 33
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### Manually run a similarity search to fetch similar documents with their scores\n",
        "\n",
        "The score is a distance, so the more similar a document is, the smaller its score."
      ],
      "metadata": {
        "id": "y-GsORdkoQOG"
      }
    },
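    {
      "cell_type": "markdown",
      "source": [
        "The scores returned below are vector distances: 0 means identical embeddings. Assuming a cosine metric, the score relates to cosine similarity as `distance = 1 - cos_sim`. A small illustrative sketch:\n",
        "\n",
        "```python\n",
        "# Illustrative only: cosine distance between two embedding vectors.\n",
        "import math\n",
        "\n",
        "def cosine_distance(a, b):\n",
        "    dot = sum(x * y for x, y in zip(a, b))\n",
        "    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))\n",
        "    return 1 - dot / norm\n",
        "\n",
        "print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0 -> identical\n",
        "print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 -> orthogonal\n",
        "```"
      ],
      "metadata": {}
    },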
    {
      "cell_type": "code",
      "source": [
        "langchain.llm_cache._cache_dict['cache:bf6f6d9ebdf492e28cb8bf4878a4b951'].similarity_search_with_score(query='Give me 2 peaches')"
      ],
      "metadata": {
        "id": "ZqSiGnAsmC2p",
        "outputId": "fa393c10-7835-41ea-ce06-454d2128e48f",
        "colab": {
          "base_uri": "https://localhost:8080/"
        }
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[(Document(page_content='Give me 2 peaches', metadata={'llm_string': \"[('_type', 'openai'), ('best_of', 2), ('frequency_penalty', 0), ('logit_bias', {}), ('max_tokens', 256), ('model_name', 'text-davinci-002'), ('n', 2), ('presence_penalty', 0), ('request_timeout', None), ('stop', None), ('temperature', 0.7), ('top_p', 1)]\", 'prompt': 'Give me 2 peaches', 'return_val': [\"\\n\\nI can't give you anything because this is not a real request.\", \"\\n\\nI can't do that.\"]}),\n",
              "  2.38418579102e-07),\n",
              " (Document(page_content='Give me a peach', metadata={'llm_string': \"[('_type', 'openai'), ('best_of', 2), ('frequency_penalty', 0), ('logit_bias', {}), ('max_tokens', 256), ('model_name', 'text-davinci-002'), ('n', 2), ('presence_penalty', 0), ('request_timeout', None), ('stop', None), ('temperature', 0.7), ('top_p', 1)]\", 'prompt': 'Give me a peach', 'return_val': ['\\n\\nA peach is a smooth, round fruit with a soft, velvety skin. The flesh is sweet and juicy, with a hint of acidity. Peaches are a good source of vitamins A and C, as well as fiber.', '\\n\\n\\nA peach is a type of fruit that is typically round and reddish-orange in color. Peaches are known for their sweetness and are often used in desserts such as pies and cobblers.']}),\n",
              "  0.0553156137466),\n",
              " (Document(page_content='Give me 2 apples', metadata={'llm_string': \"[('_type', 'openai'), ('best_of', 2), ('frequency_penalty', 0), ('logit_bias', {}), ('max_tokens', 256), ('model_name', 'text-davinci-002'), ('n', 2), ('presence_penalty', 0), ('request_timeout', None), ('stop', None), ('temperature', 0.7), ('top_p', 1)]\", 'prompt': 'Give me 2 apples', 'return_val': ['\\n\\nYou will have to wait until the apple tree grows more apples.', ', 1 banana, and 1 orange\\n\\n2 apples, 1 banana, 1 orange']}),\n",
              "  0.0637553334236),\n",
              " (Document(page_content='Give me an apple', metadata={'llm_string': \"[('_type', 'openai'), ('best_of', 2), ('frequency_penalty', 0), ('logit_bias', {}), ('max_tokens', 256), ('model_name', 'text-davinci-002'), ('n', 2), ('presence_penalty', 0), ('request_timeout', None), ('stop', None), ('temperature', 0.7), ('top_p', 1)]\", 'prompt': 'Give me an apple', 'return_val': ['\\n\\nHere is an apple.', '\\n\\nThank you!']}),\n",
              "  0.116876840591)]"
            ]
          },
          "metadata": {},
          "execution_count": 34
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Conclusion\n",
        "\n",
        "The score threshold is the key factor when using the Redis semantic cache: too loose a threshold returns semantically unrelated cached answers, while too strict a threshold defeats the purpose of the cache."
      ],
      "metadata": {
        "id": "VspGA_wSokS4"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Semantic Cache with GPTCache"
      ],
      "metadata": {
        "id": "PBXKgATeWMVZ"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### What is GPTCache?\n",
        "\n",
        "GPTCache is an open-source project dedicated to building a semantic cache for storing LLM responses.\n",
        "\n",
        "It supports two use cases:\n",
        "1. Exact match\n",
        "2. Similar match\n",
        "\n",
        "GPTCache addresses the following questions:\n",
        "1. How to generate embeddings for the queries? (via an embedding function)\n",
        "2. How to cache the data? (via the cache store of the data manager, e.g. SQLite, MySQL, or PostgreSQL; more NoSQL databases will be added in the future)\n",
        "3. How to store and search vector embeddings? (via the vector store of the data manager, e.g. FAISS, or vector databases such as Milvus; more vector databases and cloud services will be added in the future)\n",
        "4. How to determine the eviction policy? (LRU or FIFO)\n",
        "5. How to determine a cache hit or miss? (via an evaluation function)\n",
        "\n",
        "Refer to the following `Cache` class definition to see how the above questions are addressed:\n",
        "\n",
        "```python\n",
        "class Cache:\n",
        "   def init(self,\n",
        "            cache_enable_func=cache_all,\n",
        "            pre_embedding_func=last_content,\n",
        "            embedding_func=string_embedding,\n",
        "            data_manager: DataManager = get_data_manager(),\n",
        "            similarity_evaluation=ExactMatchEvaluation(),\n",
        "            post_process_messages_func=first,\n",
        "            config=Config(),\n",
        "            next_cache=None,\n",
        "            **kwargs\n",
        "            ):\n",
        "       self.has_init = True\n",
        "       self.cache_enable_func = cache_enable_func\n",
        "       self.pre_embedding_func = pre_embedding_func\n",
        "       self.embedding_func = embedding_func\n",
        "       self.data_manager: DataManager = data_manager\n",
        "       self.similarity_evaluation = similarity_evaluation\n",
        "       self.post_process_messages_func = post_process_messages_func\n",
        "       self.data_manager.init(**kwargs)\n",
        "       self.config = config\n",
        "       self.next_cache = next_cache\n",
        "```"
      ],
      "metadata": {
        "id": "vwFaSeFWWe9r"
      }
    },
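    {
      "cell_type": "markdown",
      "source": [
        "To make question 4 concrete, here is a minimal, self-contained sketch of an LRU eviction policy (not GPTCache's actual implementation; the `capacity` value is an arbitrary choice for illustration):\n",
        "\n",
        "```python\n",
        "from collections import OrderedDict\n",
        "\n",
        "class LRUCache:\n",
        "    def __init__(self, capacity: int = 2):\n",
        "        self.capacity = capacity\n",
        "        self.store = OrderedDict()\n",
        "\n",
        "    def get(self, key):\n",
        "        if key not in self.store:\n",
        "            return None  # cache miss\n",
        "        self.store.move_to_end(key)  # mark as most recently used\n",
        "        return self.store[key]\n",
        "\n",
        "    def put(self, key, value):\n",
        "        self.store[key] = value\n",
        "        self.store.move_to_end(key)\n",
        "        if len(self.store) > self.capacity:\n",
        "            self.store.popitem(last=False)  # evict the least recently used entry\n",
        "```\n",
        "\n",
        "A FIFO policy would instead evict the oldest inserted entry, regardless of how recently it was read."
      ],
      "metadata": {}
    },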
    {
      "cell_type": "code",
      "source": [
        "!pip install gptcache --quiet"
      ],
      "metadata": {
        "id": "Sp9gZNC2WXA7"
      },
      "execution_count": 5,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "import langchain\n",
        "from langchain.llms import OpenAI\n",
        "\n",
        "llm = OpenAI(model_name=\"text-davinci-002\", n=2, best_of=2)"
      ],
      "metadata": {
        "id": "Zy5CCHJl2yla"
      },
      "execution_count": 6,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Exact Match"
      ],
      "metadata": {
        "id": "4SHDJ27DW5ny"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from gptcache import Cache\n",
        "from gptcache.manager.factory import manager_factory\n",
        "from gptcache.processor.pre import get_prompt\n",
        "from langchain.cache import GPTCache\n",
        "import hashlib\n",
        "\n",
        "\n",
        "def get_hashed_name(name):\n",
        "    return hashlib.sha256(name.encode()).hexdigest()\n",
        "\n",
        "\n",
        "def init_gptcache(cache_obj: Cache, llm: str):\n",
        "    # Keep a separate cache directory per LLM configuration\n",
        "    hashed_llm = get_hashed_name(llm)\n",
        "    cache_obj.init(\n",
        "        pre_embedding_func=get_prompt,\n",
        "        # The \"map\" data manager keys entries on the exact prompt string\n",
        "        data_manager=manager_factory(manager=\"map\", data_dir=f\"map_cache_{hashed_llm}\"),\n",
        "    )\n",
        "\n",
        "\n",
        "langchain.llm_cache = GPTCache(init_gptcache)"
      ],
      "metadata": {
        "id": "oS_kGsN1XCfm"
      },
      "execution_count": 7,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "question = \"What is cache eviction policy?\""
      ],
      "metadata": {
        "id": "KlldiTqFrzrm"
      },
      "execution_count": 11,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(question)"
      ],
      "metadata": {
        "id": "HNF18bP6XE_c",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 91
        },
        "outputId": "8b3df86b-7e52-4cd4-a55e-923a25ee937f"
      },
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 10.6 ms, sys: 3.74 ms, total: 14.3 ms\n",
            "Wall time: 1.69 s\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\nA cache eviction policy is a strategy for managing the contents of a cache. When a cache becomes full, the policy determines which items will be removed to make room for new items.'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 12
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(question)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 91
        },
        "id": "nnXOyMwWrtla",
        "outputId": "2be6a860-35f8-4089-85c4-37a47bcb61ee"
      },
      "execution_count": 13,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 627 µs, sys: 0 ns, total: 627 µs\n",
            "Wall time: 634 µs\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\nA cache eviction policy is a strategy for managing the contents of a cache. When a cache becomes full, the policy determines which items will be removed to make room for new items.'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 13
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(\"What is cache eviction   policy?\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 73
        },
        "id": "Vqp-2H-sBp64",
        "outputId": "8893f8e3-6913-4e1c-ee50-d6dcfcc06883"
      },
      "execution_count": 14,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 10.7 ms, sys: 87 µs, total: 10.8 ms\n",
            "Wall time: 966 ms\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\nThere are several cache eviction policies, but the two most common are least recently used (LRU) and first in, first out (FIFO).'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 14
        }
      ]
    },
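    {
      "cell_type": "markdown",
      "source": [
        "Note that the last call above missed the cache: the exact-match cache keys entries on the literal prompt string, so the extra spaces in \"cache eviction   policy\" produce a different key. Assuming the `map` data manager behaves like a plain dictionary, the miss can be sketched as:\n",
        "\n",
        "```python\n",
        "cache = {}\n",
        "cache[\"What is cache eviction policy?\"] = \"cached answer\"\n",
        "\n",
        "# The extra spaces change the key, so this lookup misses:\n",
        "print(\"What is cache eviction   policy?\" in cache)  # False\n",
        "```"
      ],
      "metadata": {}
    },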
    {
      "cell_type": "markdown",
      "source": [
        "### Similar Match"
      ],
      "metadata": {
        "id": "BixAYE1ysCda"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from gptcache import Cache\n",
        "from gptcache.adapter.api import init_similar_cache\n",
        "from langchain.cache import GPTCache\n",
        "import hashlib\n",
        "\n",
        "\n",
        "def get_hashed_name(name):\n",
        "    return hashlib.sha256(name.encode()).hexdigest()\n",
        "\n",
        "\n",
        "def init_gptcache(cache_obj: Cache, llm: str):\n",
        "    hashed_llm = get_hashed_name(llm)\n",
        "    # init_similar_cache wires up an embedding function and a vector store,\n",
        "    # so lookups match semantically similar prompts rather than exact strings\n",
        "    init_similar_cache(cache_obj=cache_obj, data_dir=f\"similar_cache_{hashed_llm}\")\n",
        "\n",
        "\n",
        "langchain.llm_cache = GPTCache(init_gptcache)"
      ],
      "metadata": {
        "id": "847yNhrlsCEt"
      },
      "execution_count": 15,
      "outputs": []
    },
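    {
      "cell_type": "markdown",
      "source": [
        "With `init_similar_cache`, the cache no longer compares prompt strings directly: each prompt is embedded into a vector, and a lookup searches for the nearest stored vector within a distance threshold. Below is a minimal sketch of that matching step; the vectors and the 0.95 threshold are made up for illustration, while the real cache uses an embedding model and a vector index:\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "def cosine_similarity(a, b):\n",
        "    dot = sum(x * y for x, y in zip(a, b))\n",
        "    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))\n",
        "    return dot / norm\n",
        "\n",
        "# One cached prompt with its (toy) embedding\n",
        "stored = {\"What is cache eviction policy?\": [0.9, 0.1, 0.3]}\n",
        "\n",
        "# Embedding of a slightly different incoming prompt\n",
        "query_vector = [0.88, 0.12, 0.31]\n",
        "\n",
        "for prompt, vector in stored.items():\n",
        "    if cosine_similarity(query_vector, vector) >= 0.95:  # similarity threshold\n",
        "        print(\"cache hit:\", prompt)\n",
        "```\n",
        "\n",
        "This is also why a too-loose threshold can return a cached answer for a semantically different prompt, as the peach examples below show."
      ],
      "metadata": {}
    },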
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(question)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 73
        },
        "id": "MssW4YLXsL4B",
        "outputId": "43ad7c5c-8621-41bc-eb81-c125ee08bcb1"
      },
      "execution_count": 16,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 1.94 s, sys: 313 ms, total: 2.26 s\n",
            "Wall time: 2.49 s\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\nA cache eviction policy is a set of rules that determine when and how often cached data is removed from the cache.'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 16
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(question)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 73
        },
        "id": "t0sIlm0xsYf2",
        "outputId": "33b72b52-4ad7-487e-90e0-a9cc2b6b547e"
      },
      "execution_count": 17,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 1.21 s, sys: 342 µs, total: 1.21 s\n",
            "Wall time: 636 ms\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\nA cache eviction policy is a set of rules that determine when and how often cached data is removed from the cache.'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 17
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(\"What is cache eviction   policy?\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 73
        },
        "id": "Wnq0CDSBsd2y",
        "outputId": "ae860031-f9ce-4912-dac9-28e54ec4baca"
      },
      "execution_count": 18,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 1.14 s, sys: 0 ns, total: 1.14 s\n",
            "Wall time: 1.01 s\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\nA cache eviction policy is a set of rules that determine when and how often cached data is removed from the cache.'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 18
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(\"Give me a peach\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 73
        },
        "id": "1MYGLb8GCiEB",
        "outputId": "90663424-ec3a-4c85-9ca8-374b517566ba"
      },
      "execution_count": 19,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 2.52 s, sys: 284 ms, total: 2.8 s\n",
            "Wall time: 5.54 s\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\nA peach is a fruit that is typically round or oval in shape and has a soft, fuzzy outer skin. The flesh of a peach is usually yellow or white and is sweet and juicy.'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 19
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "\n",
        "llm(\"Give me 2 peaches\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 73
        },
        "id": "e_PX54xiCi7g",
        "outputId": "982e42bd-edcd-45b5-fc47-cd15383168d5"
      },
      "execution_count": 20,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CPU times: user 1.18 s, sys: 22.6 ms, total: 1.2 s\n",
            "Wall time: 645 ms\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'\\n\\nA peach is a fruit that is typically round or oval in shape and has a soft, fuzzy outer skin. The flesh of a peach is usually yellow or white and is sweet and juicy.'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 20
        }
      ]
    }
  ]
}