{"cells":[{"cell_type":"markdown","id":"67a0b8fb","metadata":{"id":"67a0b8fb"},"source":["# Build a RAG chain and agent with conversational memory | 🦜️🔗 LangChain\n","\n","[https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/](https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/)\n","\n","In many Q&A applications we want to let users have a multi-turn conversation, which means the application needs some form of memory of past questions and answers, along with logic for incorporating them into the current conversation.\n","\n","In this guide we focus on **adding the logic for incorporating historical messages**. For more details on managing chat history, see [here](https://python.langchain.com/v0.2/docs/how_to/message_history/).\n","\n","We will cover two approaches:\n","\n","1. Chains, in which we always execute the retrieval step;\n","2. Agents, in which we let the LLM decide whether and how to execute the retrieval step (or several steps).\n","\n","As the external knowledge source we will use the same article as in the [RAG tutorial](https://python.langchain.com/v0.2/docs/tutorials/rag/): the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng.\n","\n","## Setup\n","\n","### Dependencies\n","\n","We will use an OpenAI embedding model and a Chroma vector store in this tutorial, but everything demonstrated here works with any [Embeddings](https://python.langchain.com/v0.2/docs/concepts/#embedding-models) model, [VectorStore](https://python.langchain.com/v0.2/docs/concepts/#vectorstores), or [Retriever](https://python.langchain.com/v0.2/docs/concepts/#retrievers) that LangChain provides."]},
{"cell_type":"code","execution_count":1,"id":"ac8ff7ae","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"ac8ff7ae","executionInfo":{"status":"ok","timestamp":1718326681676,"user_tz":-480,"elapsed":62953,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"3b4cce3a-0e06-4ab9-a7cc-62b6ee73e689"},"outputs":[{"output_type":"stream","name":"stdout","text":["  Installing build dependencies ... done\n","  Getting requirements to build wheel ... done\n","  Preparing metadata (pyproject.toml) ... done\n","  Building wheel for pypika (pyproject.toml) ... 
\u001b[?25l\u001b[?25hdone\n"]}],"source":["pip install --upgrade --quiet langchain langchain-community langchainhub langchain-chroma bs4 langchain-openai langgraph"]},{"cell_type":"code","source":["pip list"],"metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"MZeHnUp8BNBf","executionInfo":{"status":"ok","timestamp":1718197937839,"user_tz":-480,"elapsed":2793,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"53a4efee-a571-4522-e996-cf729f2b8c4c"},"id":"MZeHnUp8BNBf","execution_count":null,"outputs":[{"output_type":"stream","name":"stdout","text":["Package                                  Version\n","---------------------------------------- ---------------------\n","absl-py                                  1.4.0\n","aiohttp                                  3.9.5\n","aiosignal                                1.3.1\n","alabaster                                0.7.16\n","albumentations                           1.3.1\n","altair                                   4.2.2\n","annotated-types                          0.7.0\n","anyio                                    3.7.1\n","argon2-cffi                              23.1.0\n","argon2-cffi-bindings                     21.2.0\n","array_record                             0.5.1\n","arviz                                    0.15.1\n","asgiref                                  3.8.1\n","astropy                                  5.3.4\n","astunparse                               1.6.3\n","async-timeout                            4.0.3\n","atpublic                                 4.1.0\n","attrs                                    23.2.0\n","audioread                                3.0.1\n","autograd                                 1.6.2\n","Babel                                    2.15.0\n","backcall                                 0.2.0\n","backoff                                  2.2.1\n","bcrypt                                   4.1.3\n","beautifulsoup4                           4.12.3\n","bidict 
                                  0.23.1\n","bigframes                                1.8.0\n","bleach                                   6.1.0\n","blinker                                  1.4\n","blis                                     0.7.11\n","blosc2                                   2.0.0\n","bokeh                                    3.3.4\n","bqplot                                   0.12.43\n","branca                                   0.7.2\n","bs4                                      0.0.2\n","build                                    1.2.1\n","CacheControl                             0.14.0\n","cachetools                               5.3.3\n","catalogue                                2.0.10\n","certifi                                  2024.6.2\n","cffi                                     1.16.0\n","chardet                                  5.2.0\n","charset-normalizer                       3.3.2\n","chex                                     0.1.86\n","chroma-hnswlib                           0.7.3\n","chromadb                                 0.5.0\n","click                                    8.1.7\n","click-plugins                            1.1.1\n","cligj                                    0.7.2\n","cloudpathlib                             0.16.0\n","cloudpickle                              2.2.1\n","cmake                                    3.27.9\n","cmdstanpy                                1.2.3\n","colorcet                                 3.1.0\n","coloredlogs                              15.0.1\n","colorlover                               0.3.0\n","colour                                   0.1.5\n","community                                1.0.0b1\n","confection                               0.1.5\n","cons                                     0.4.6\n","contextlib2                              21.6.0\n","contourpy                                1.2.1\n","cryptography                             42.0.7\n","cuda-python                              
12.2.1\n","cudf-cu12                                24.4.1\n","cufflinks                                0.17.3\n","cupy-cuda12x                             12.2.0\n","cvxopt                                   1.3.2\n","cvxpy                                    1.3.4\n","cycler                                   0.12.1\n","cymem                                    2.0.8\n","Cython                                   3.0.10\n","dask                                     2023.8.1\n","dataclasses-json                         0.6.7\n","datascience                              0.17.6\n","db-dtypes                                1.2.0\n","dbus-python                              1.2.18\n","debugpy                                  1.6.6\n","decorator                                4.4.2\n","defusedxml                               0.7.1\n","Deprecated                               1.2.14\n","distributed                              2023.8.1\n","distro                                   1.7.0\n","dlib                                     19.24.4\n","dm-tree                                  0.1.8\n","dnspython                                2.6.1\n","docstring_parser                         0.16\n","docutils                                 0.18.1\n","dopamine_rl                              4.0.9\n","duckdb                                   0.10.3\n","earthengine-api                          0.1.405\n","easydict                                 1.13\n","ecos                                     2.0.13\n","editdistance                             0.6.2\n","eerepr                                   0.0.4\n","email_validator                          2.1.1\n","en-core-web-sm                           3.7.1\n","entrypoints                              0.4\n","et-xmlfile                               1.1.0\n","etils                                    1.7.0\n","etuples                                  0.3.9\n","exceptiongroup                           1.2.1\n","fastai                            
       2.7.15\n","fastapi                                  0.111.0\n","fastapi-cli                              0.0.4\n","fastcore                                 1.5.43\n","fastdownload                             0.0.7\n","fastjsonschema                           2.19.1\n","fastprogress                             1.0.3\n","fastrlock                                0.8.2\n","filelock                                 3.14.0\n","fiona                                    1.9.6\n","firebase-admin                           5.3.0\n","Flask                                    2.2.5\n","flatbuffers                              24.3.25\n","flax                                     0.8.4\n","folium                                   0.14.0\n","fonttools                                4.53.0\n","frozendict                               2.4.4\n","frozenlist                               1.4.1\n","fsspec                                   2023.6.0\n","future                                   0.18.3\n","gast                                     0.5.4\n","gcsfs                                    2023.6.0\n","GDAL                                     3.6.4\n","gdown                                    5.1.0\n","geemap                                   0.32.1\n","gensim                                   4.3.2\n","geocoder                                 1.38.1\n","geographiclib                            2.0\n","geopandas                                0.13.2\n","geopy                                    2.3.0\n","gin-config                               0.5.0\n","glob2                                    0.7\n","google                                   2.0.3\n","google-ai-generativelanguage             0.6.4\n","google-api-core                          2.11.1\n","google-api-python-client                 2.84.0\n","google-auth                              2.27.0\n","google-auth-httplib2                     0.1.1\n","google-auth-oauthlib                     1.2.0\n","google-cloud-aiplatform   
               1.52.0\n","google-cloud-bigquery                    3.21.0\n","google-cloud-bigquery-connection         1.12.1\n","google-cloud-bigquery-storage            2.25.0\n","google-cloud-core                        2.3.3\n","google-cloud-datastore                   2.15.2\n","google-cloud-firestore                   2.11.1\n","google-cloud-functions                   1.13.3\n","google-cloud-iam                         2.15.0\n","google-cloud-language                    2.13.3\n","google-cloud-resource-manager            1.12.3\n","google-cloud-storage                     2.8.0\n","google-cloud-translate                   3.11.3\n","google-colab                             1.0.0\n","google-crc32c                            1.5.0\n","google-generativeai                      0.5.4\n","google-pasta                             0.2.0\n","google-resumable-media                   2.7.0\n","googleapis-common-protos                 1.63.1\n","googledrivedownloader                    0.4\n","graphviz                                 0.20.3\n","greenlet                                 3.0.3\n","grpc-google-iam-v1                       0.13.0\n","grpcio                                   1.64.1\n","grpcio-status                            1.48.2\n","gspread                                  6.0.2\n","gspread-dataframe                        3.3.1\n","gym                                      0.25.2\n","gym-notices                              0.0.8\n","h11                                      0.14.0\n","h5netcdf                                 1.3.0\n","h5py                                     3.9.0\n","holidays                                 0.50\n","holoviews                                1.17.1\n","html5lib                                 1.1\n","httpcore                                 1.0.5\n","httpimport                               1.3.1\n","httplib2                                 0.22.0\n","httptools                                0.6.1\n","httpx                 
                   0.27.0\n","huggingface-hub                          0.23.2\n","humanfriendly                            10.0\n","humanize                                 4.7.0\n","hyperopt                                 0.2.7\n","ibis-framework                           8.0.0\n","idna                                     3.7\n","imageio                                  2.31.6\n","imageio-ffmpeg                           0.5.1\n","imagesize                                1.4.1\n","imbalanced-learn                         0.10.1\n","imgaug                                   0.4.0\n","immutabledict                            4.2.0\n","importlib_metadata                       7.1.0\n","importlib_resources                      6.4.0\n","imutils                                  0.5.4\n","inflect                                  7.0.0\n","iniconfig                                2.0.0\n","intel-openmp                             2023.2.4\n","ipyevents                                2.0.2\n","ipyfilechooser                           0.6.0\n","ipykernel                                5.5.6\n","ipyleaflet                               0.18.2\n","ipython                                  7.34.0\n","ipython-genutils                         0.2.0\n","ipython-sql                              0.5.0\n","ipytree                                  0.2.2\n","ipywidgets                               7.7.1\n","itsdangerous                             2.2.0\n","jax                                      0.4.26\n","jaxlib                                   0.4.26+cuda12.cudnn89\n","jeepney                                  0.7.1\n","jellyfish                                1.0.4\n","jieba                                    0.42.1\n","Jinja2                                   3.1.4\n","joblib                                   1.4.2\n","jsonpatch                                1.33\n","jsonpickle                               3.0.4\n","jsonpointer                              
3.0.0\n","jsonschema                               4.19.2\n","jsonschema-specifications                2023.12.1\n","jupyter-client                           6.1.12\n","jupyter-console                          6.1.0\n","jupyter_core                             5.7.2\n","jupyter-server                           1.24.0\n","jupyterlab_pygments                      0.3.0\n","jupyterlab_widgets                       3.0.11\n","kaggle                                   1.6.14\n","kagglehub                                0.2.5\n","keras                                    2.15.0\n","keyring                                  23.5.0\n","kiwisolver                               1.4.5\n","kubernetes                               30.1.0\n","langchain                                0.2.3\n","langchain-chroma                         0.1.1\n","langchain-community                      0.2.4\n","langchain-core                           0.2.5\n","langchain-openai                         0.1.8\n","langchain-text-splitters                 0.2.1\n","langchainhub                             0.1.18\n","langcodes                                3.4.0\n","langgraph                                0.0.66\n","langsmith                                0.1.77\n","language_data                            1.2.0\n","launchpadlib                             1.10.16\n","lazr.restfulclient                       0.14.4\n","lazr.uri                                 1.0.6\n","lazy_loader                              0.4\n","libclang                                 18.1.1\n","librosa                                  0.10.2.post1\n","lightgbm                                 4.1.0\n","linkify-it-py                            2.0.3\n","llvmlite                                 0.41.1\n","locket                                   1.0.0\n","logical-unification                      0.4.6\n","lxml                                     4.9.4\n","malloy                                   2023.1067\n","marisa-trie            
                  1.1.1\n","Markdown                                 3.6\n","markdown-it-py                           3.0.0\n","MarkupSafe                               2.1.5\n","marshmallow                              3.21.3\n","matplotlib                               3.7.1\n","matplotlib-inline                        0.1.7\n","matplotlib-venn                          0.11.10\n","mdit-py-plugins                          0.4.1\n","mdurl                                    0.1.2\n","miniKanren                               1.0.3\n","missingno                                0.5.2\n","mistune                                  0.8.4\n","mizani                                   0.9.3\n","mkl                                      2023.2.0\n","ml-dtypes                                0.2.0\n","mlxtend                                  0.22.0\n","mmh3                                     4.1.0\n","monotonic                                1.6\n","more-itertools                           10.1.0\n","moviepy                                  1.0.3\n","mpmath                                   1.3.0\n","msgpack                                  1.0.8\n","multidict                                6.0.5\n","multipledispatch                         1.0.0\n","multitasking                             0.0.11\n","murmurhash                               1.0.10\n","music21                                  9.1.0\n","mypy-extensions                          1.0.0\n","natsort                                  8.4.0\n","nbclassic                                1.1.0\n","nbclient                                 0.10.0\n","nbconvert                                6.5.4\n","nbformat                                 5.10.4\n","nest-asyncio                             1.6.0\n","networkx                                 3.3\n","nibabel                                  4.0.2\n","nltk                                     3.8.1\n","notebook                                 6.5.5\n","notebook_shim               
             0.2.4\n","numba                                    0.58.1\n","numexpr                                  2.10.0\n","numpy                                    1.25.2\n","nvtx                                     0.2.10\n","oauth2client                             4.1.3\n","oauthlib                                 3.2.2\n","onnxruntime                              1.18.0\n","openai                                   1.33.0\n","opencv-contrib-python                    4.8.0.76\n","opencv-python                            4.8.0.76\n","opencv-python-headless                   4.10.0.82\n","openpyxl                                 3.1.3\n","opentelemetry-api                        1.25.0\n","opentelemetry-exporter-otlp-proto-common 1.25.0\n","opentelemetry-exporter-otlp-proto-grpc   1.25.0\n","opentelemetry-instrumentation            0.46b0\n","opentelemetry-instrumentation-asgi       0.46b0\n","opentelemetry-instrumentation-fastapi    0.46b0\n","opentelemetry-proto                      1.25.0\n","opentelemetry-sdk                        1.25.0\n","opentelemetry-semantic-conventions       0.46b0\n","opentelemetry-util-http                  0.46b0\n","opt-einsum                               3.3.0\n","optax                                    0.2.2\n","orbax-checkpoint                         0.4.4\n","orjson                                   3.10.4\n","osqp                                     0.6.2.post8\n","overrides                                7.7.0\n","packaging                                23.2\n","pandas                                   2.0.3\n","pandas-datareader                        0.10.0\n","pandas-gbq                               0.19.2\n","pandas-stubs                             2.0.3.230814\n","pandocfilters                            1.5.1\n","panel                                    1.3.8\n","param                                    2.1.0\n","parso                                    0.8.4\n","parsy                                    
2.1\n","partd                                    1.4.2\n","pathlib                                  1.0.1\n","patsy                                    0.5.6\n","peewee                                   3.17.5\n","pexpect                                  4.9.0\n","pickleshare                              0.7.5\n","Pillow                                   9.4.0\n","pip                                      23.1.2\n","pip-tools                                6.13.0\n","platformdirs                             4.2.2\n","plotly                                   5.15.0\n","plotnine                                 0.12.4\n","pluggy                                   1.5.0\n","polars                                   0.20.2\n","pooch                                    1.8.1\n","portpicker                               1.5.2\n","posthog                                  3.5.0\n","prefetch-generator                       1.0.3\n","preshed                                  3.0.9\n","prettytable                              3.10.0\n","proglog                                  0.1.10\n","progressbar2                             4.2.0\n","prometheus_client                        0.20.0\n","promise                                  2.3\n","prompt_toolkit                           3.0.45\n","prophet                                  1.1.5\n","proto-plus                               1.23.0\n","protobuf                                 3.20.3\n","psutil                                   5.9.5\n","psycopg2                                 2.9.9\n","ptyprocess                               0.7.0\n","py-cpuinfo                               9.0.0\n","py4j                                     0.10.9.7\n","pyarrow                                  14.0.2\n","pyarrow-hotfix                           0.6\n","pyasn1                                   0.6.0\n","pyasn1_modules                           0.4.0\n","pycocotools                              2.0.7\n","pycparser                                
2.22\n","pydantic                                 2.7.3\n","pydantic_core                            2.18.4\n","pydata-google-auth                       1.8.2\n","pydot                                    1.4.2\n","pydot-ng                                 2.0.0\n","pydotplus                                2.0.2\n","PyDrive                                  1.3.1\n","PyDrive2                                 1.6.3\n","pyerfa                                   2.0.1.4\n","pygame                                   2.5.2\n","Pygments                                 2.16.1\n","PyGObject                                3.42.1\n","PyJWT                                    2.3.0\n","pymc                                     5.10.4\n","pymystem3                                0.2.0\n","pynvjitlink-cu12                         0.2.3\n","PyOpenGL                                 3.1.7\n","pyOpenSSL                                24.1.0\n","pyparsing                                3.1.2\n","pyperclip                                1.8.2\n","PyPika                                   0.48.9\n","pyproj                                   3.6.1\n","pyproject_hooks                          1.1.0\n","pyshp                                    2.3.1\n","PySocks                                  1.7.1\n","pytensor                                 2.18.6\n","pytest                                   7.4.4\n","python-apt                               0.0.0\n","python-box                               7.1.1\n","python-dateutil                          2.8.2\n","python-dotenv                            1.0.1\n","python-louvain                           0.16\n","python-multipart                         0.0.9\n","python-slugify                           8.0.4\n","python-utils                             3.8.2\n","pytz                                     2023.4\n","pyviz_comms                              3.0.2\n","PyWavelets                               1.6.0\n","PyYAML                                   
6.0.1\n","pyzmq                                    24.0.1\n","qdldl                                    0.1.7.post2\n","qudida                                   0.0.4\n","ratelim                                  0.1.6\n","referencing                              0.35.1\n","regex                                    2024.5.15\n","requests                                 2.31.0\n","requests-oauthlib                        1.3.1\n","requirements-parser                      0.9.0\n","rich                                     13.7.1\n","rmm-cu12                                 24.4.0\n","rpds-py                                  0.18.1\n","rpy2                                     3.4.2\n","rsa                                      4.9\n","safetensors                              0.4.3\n","scikit-image                             0.19.3\n","scikit-learn                             1.2.2\n","scipy                                    1.11.4\n","scooby                                   0.10.0\n","scs                                      3.2.4.post2\n","seaborn                                  0.13.1\n","SecretStorage                            3.3.1\n","Send2Trash                               1.8.3\n","sentencepiece                            0.1.99\n","setuptools                               67.7.2\n","shapely                                  2.0.4\n","shellingham                              1.5.4\n","simple_parsing                           0.1.5\n","six                                      1.16.0\n","sklearn-pandas                           2.2.0\n","smart-open                               6.4.0\n","sniffio                                  1.3.1\n","snowballstemmer                          2.2.0\n","sortedcontainers                         2.4.0\n","soundfile                                0.12.1\n","soupsieve                                2.5\n","soxr                                     0.3.7\n","spacy                                    3.7.4\n","spacy-legacy              
               3.0.12\n","spacy-loggers                            1.0.5\n","Sphinx                                   5.0.2\n","sphinxcontrib-applehelp                  1.0.8\n","sphinxcontrib-devhelp                    1.0.6\n","sphinxcontrib-htmlhelp                   2.0.5\n","sphinxcontrib-jsmath                     1.0.1\n","sphinxcontrib-qthelp                     1.0.7\n","sphinxcontrib-serializinghtml            1.1.10\n","SQLAlchemy                               2.0.30\n","sqlglot                                  20.11.0\n","sqlparse                                 0.5.0\n","srsly                                    2.4.8\n","stanio                                   0.5.0\n","starlette                                0.37.2\n","statsmodels                              0.14.2\n","StrEnum                                  0.4.15\n","sympy                                    1.12.1\n","tables                                   3.8.0\n","tabulate                                 0.9.0\n","tbb                                      2021.12.0\n","tblib                                    3.0.0\n","tenacity                                 8.3.0\n","tensorboard                              2.15.2\n","tensorboard-data-server                  0.7.2\n","tensorflow                               2.15.0\n","tensorflow-datasets                      4.9.5\n","tensorflow-estimator                     2.15.0\n","tensorflow-gcs-config                    2.15.0\n","tensorflow-hub                           0.16.1\n","tensorflow-io-gcs-filesystem             0.37.0\n","tensorflow-metadata                      1.15.0\n","tensorflow-probability                   0.23.0\n","tensorstore                              0.1.45\n","termcolor                                2.4.0\n","terminado                                0.18.1\n","text-unidecode                           1.3\n","textblob                                 0.17.1\n","tf_keras                                 2.15.1\n","tf-slim       
                           1.1.0\n","thinc                                    8.2.3\n","threadpoolctl                            3.5.0\n","tifffile                                 2024.5.22\n","tiktoken                                 0.7.0\n","tinycss2                                 1.3.0\n","tokenizers                               0.19.1\n","toml                                     0.10.2\n","tomli                                    2.0.1\n","toolz                                    0.12.1\n","torch                                    2.3.0+cu121\n","torchaudio                               2.3.0+cu121\n","torchsummary                             1.5.1\n","torchtext                                0.18.0\n","torchvision                              0.18.0+cu121\n","tornado                                  6.3.3\n","tqdm                                     4.66.4\n","traitlets                                5.7.1\n","traittypes                               0.2.1\n","transformers                             4.41.2\n","triton                                   2.3.0\n","tweepy                                   4.14.0\n","typer                                    0.12.3\n","types-pytz                               2024.1.0.20240417\n","types-requests                           2.32.0.20240602\n","types-setuptools                         70.0.0.20240524\n","typing_extensions                        4.12.1\n","typing-inspect                           0.9.0\n","tzdata                                   2024.1\n","tzlocal                                  5.2\n","uc-micro-py                              1.0.3\n","ujson                                    5.10.0\n","uritemplate                              4.1.1\n","urllib3                                  2.0.7\n","uvicorn                                  0.30.1\n","uvloop                                   0.19.0\n","vega-datasets                            0.9.0\n","wadllib                                  1.3.6\n","wasabi    
                                1.1.3\n","watchfiles                               0.22.0\n","wcwidth                                  0.2.13\n","weasel                                   0.3.4\n","webcolors                                1.13\n","webencodings                             0.5.1\n","websocket-client                         1.8.0\n","websockets                               12.0\n","Werkzeug                                 3.0.3\n","wheel                                    0.43.0\n","widgetsnbextension                       3.6.6\n","wordcloud                                1.9.3\n","wrapt                                    1.14.1\n","xarray                                   2023.7.0\n","xarray-einstats                          0.7.0\n","xgboost                                  2.0.3\n","xlrd                                     2.0.1\n","xyzservices                              2024.4.0\n","yarl                                     1.9.4\n","yellowbrick                              1.5\n","yfinance                                 0.2.40\n","zict                                     3.0.0\n","zipp                                     3.19.1\n"]}]},{"cell_type":"markdown","id":"5933dc47","metadata":{"id":"5933dc47"},"source":["We need to set the `OPENAI_API_KEY` environment variable. It can be set directly, or loaded from a `.env` file as follows:"]},{"cell_type":"code","execution_count":2,"id":"5dd7d92d","metadata":{"id":"5dd7d92d","executionInfo":{"status":"ok","timestamp":1718326693649,"user_tz":-480,"elapsed":5457,"user":{"displayName":"李辉","userId":"12972001611808140221"}}},"outputs":[],"source":["import os\n","from google.colab import userdata\n","os.environ[\"OPENAI_API_KEY\"] = userdata.get('OPENAI_API_KEY')\n","os.environ[\"OPENAI_API_BASE\"] = userdata.get('OPENAI_API_BASE')\n","os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n","os.environ[\"LANGCHAIN_API_KEY\"] = userdata.get('LANGCHAIN_API_KEY')"]},{"cell_type":"markdown","id":"8355704e","metadata":{"id":"8355704e"},"source":["### 
LangSmith\n","\n","Many applications built with LangChain contain multiple steps and make multiple LLM calls. As these applications grow more complex, it becomes crucial to be able to inspect exactly what is happening inside a chain or agent. We can use [LangSmith](https://smith.langchain.com/) to view these internal details.\n","\n","Note that LangSmith is not required; it is simply very useful when developing and debugging an application. To use it, register on the [official site](https://smith.langchain.com/) and request an API key. A free monthly quota is provided, which is plenty for learning and testing. Once the key is set in the environment variables, LangSmith tracing works automatically."]},{"cell_type":"markdown","id":"169b0d0e","metadata":{"id":"169b0d0e"},"source":["## Chains[​](https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/#chains)\n","\n","First, let's revisit the Q&A application from [the previous lesson](https://www.notion.so/RAG-LangChain-08a53958496f4d20913f855b92559c3c?pvs=21). The local knowledge base still uses the blog post [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/)."]},{"cell_type":"code","execution_count":3,"id":"646e0af2","metadata":{"id":"646e0af2","executionInfo":{"status":"ok","timestamp":1718326697568,"user_tz":-480,"elapsed":1847,"user":{"displayName":"李辉","userId":"12972001611808140221"}}},"outputs":[],"source":["from langchain_openai import ChatOpenAI\n","\n","llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n"]},{"cell_type":"code","execution_count":4,"id":"5b6b7303","metadata":{"id":"5b6b7303","executionInfo":{"status":"ok","timestamp":1718326708962,"user_tz":-480,"elapsed":8746,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"colab":{"base_uri":"https://localhost:8080/"},"outputId":"c253738d-95a5-492b-ba63-3f3635f9b5a7"},"outputs":[{"output_type":"stream","name":"stderr","text":["WARNING:root:USER_AGENT environment variable not set, consider setting it to identify your requests.\n"]}],"source":["import bs4\n","from langchain import hub\n","from langchain.chains import create_retrieval_chain\n","from langchain.chains.combine_documents import create_stuff_documents_chain\n","from langchain_chroma import Chroma\n","from langchain_community.document_loaders import WebBaseLoader\n","from langchain_core.prompts import ChatPromptTemplate\n","from langchain_openai import OpenAIEmbeddings\n","from langchain_text_splitters import 
RecursiveCharacterTextSplitter\n","\n","# 1. Load, chunk and index the contents of the blog to create a retriever.\n","loader = WebBaseLoader(\n","    web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n","    bs_kwargs=dict(\n","        parse_only=bs4.SoupStrainer(\n","            class_=(\"post-content\", \"post-title\", \"post-header\")\n","        )\n","    ),\n",")\n","docs = loader.load()\n","\n","text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n","splits = text_splitter.split_documents(docs)\n","vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())\n","retriever = vectorstore.as_retriever()\n","\n","# 2. Incorporate the retriever into a question-answering chain.\n","system_prompt = (\n","    \"You are an assistant for question-answering tasks. \"\n","    \"Use the following pieces of retrieved context to answer \"\n","    \"the question. If you don't know the answer, say that you \"\n","    \"don't know. Use three sentences maximum and keep the \"\n","    \"answer concise.\"\n","    \"\\n\\n\"\n","    \"{context}\"\n",")\n","\n","prompt = ChatPromptTemplate.from_messages(\n","    [\n","        (\"system\", system_prompt),\n","        (\"human\", \"{input}\"),\n","    ]\n",")\n","\n","question_answer_chain = create_stuff_documents_chain(llm, prompt)\n","rag_chain = create_retrieval_chain(retriever, question_answer_chain)"]},{"cell_type":"markdown","id":"40efa56a","metadata":{"id":"40efa56a"},"source":["**API Reference:**[create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) | [create_stuff_documents_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) | [WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | 
[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)"]},{"cell_type":"code","execution_count":null,"id":"9adecd4a","metadata":{"colab":{"base_uri":"https://localhost:8080/","height":70},"id":"9adecd4a","executionInfo":{"status":"ok","timestamp":1718198029009,"user_tz":-480,"elapsed":5200,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"aeee93fe-4ddd-477e-c694-3f5ec5583674"},"outputs":[{"output_type":"execute_result","data":{"text/plain":["'Task decomposition is the process of breaking down a complex task into smaller and more manageable subtasks. This can be done using techniques such as Chain of Thought (CoT) or Tree of Thoughts, which help the agent to think step by step and explore multiple reasoning possibilities at each step. Task decomposition can be carried out by language model models with simple prompting, task-specific instructions, or human inputs.'"],"application/vnd.google.colaboratory.intrinsic+json":{"type":"string"}},"metadata":{},"execution_count":6}],"source":["response = rag_chain.invoke({\"input\": \"What is Task Decomposition?\"})\n","response[\"answer\"]"]},{"cell_type":"markdown","id":"a2ca3acb","metadata":{"id":"a2ca3acb"},"source":["```\n","\"Task decomposition involves breaking down complex tasks into smaller and simpler steps to make them more manageable for an agent or model. This process helps in guiding the agent through the various subgoals required to achieve the overall task efficiently. 
Different techniques like Chain of Thought and Tree of Thoughts can be used to decompose tasks into step-by-step processes, enhancing performance and understanding of the model's thinking process.\"\n","```\n","\n","We used the built-in chain constructors `create_stuff_documents_chain` and `create_retrieval_chain`. The `rag_chain` is composed of:\n","\n","1. a retriever;\n","2. a prompt template;\n","3. the LLM.\n","\n","These constructors will simplify the process of incorporating chat history.\n","\n","### Adding chat history\n","\n","The chain we built above uses the input question to retrieve context relevant to the knowledge base. In a conversational setting, however, the user's question may only make sense in the context of the dialogue. For example:\n","\n","> Human: \"What is Task Decomposition?\"\n",">\n","> AI: \"Task decomposition involves breaking down complex tasks into smaller and simpler steps to make them more manageable for an agent or model.\"\n",">\n","> Human: \"What are common ways of doing it?\"\n",">\n","\n","To understand the second question, our application needs to recognize that \"it\" refers to \"Task Decomposition.\"\n","\n","We'll need to update two things about our existing app:\n","\n","1. **Prompt**: update our prompt template to support historical messages as input.\n","2. **Contextualizing questions**: add a sub-chain that takes the latest user question and reformulates it in the context of the chat history. This can be thought of simply as building a new \"history-aware\" retriever.\n","\n","Previous flow:\n","\n","- `query` -> `retriever`\n","\n","New flow:\n","\n","- `(query, conversation history)` -> `LLM` -> `rephrased query` -> `retriever`\n","\n","### **Contextualizing the question**\n","\n","First, we need to define a sub-chain that takes the historical messages and the latest user question, and reformulates the question whenever it references information from the history.\n","\n","We use a prompt that includes a `MessagesPlaceholder` variable named \"chat_history\". This lets us pass a list of messages to the prompt via the \"chat_history\" input key; these messages are inserted after the system message and before the latest question.\n","\n","We use the [`create_history_aware_retriever`](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) function, which builds a retriever chain that runs `prompt | llm | StrOutputParser() | retriever` in sequence. The chain takes `input` and `chat_history` as inputs, and its output has the same schema as a `retriever`."]},{"cell_type":"code","execution_count":5,"id":"8f0af995","metadata":{"id":"8f0af995","executionInfo":{"status":"ok","timestamp":1718326710014,"user_tz":-480,"elapsed":3,"user":{"displayName":"李辉","userId":"12972001611808140221"}}},"outputs":[],"source":["from langchain.chains import create_history_aware_retriever\n","from langchain_core.prompts import MessagesPlaceholder\n","\n","contextualize_q_system_prompt = (\n","    \"Given a chat history and the latest user question \"\n","    \"which might reference context in the chat history, \"\n","    \"formulate a standalone question which can be understood \"\n","    \"without the chat history. Do NOT answer the question, \"\n","    \"just reformulate it if needed and otherwise return it as is.\"\n",")\n","\n","contextualize_q_prompt = ChatPromptTemplate.from_messages(\n","    [\n","        (\"system\", contextualize_q_system_prompt),\n","        MessagesPlaceholder(\"chat_history\"),\n","        (\"human\", \"{input}\"),\n","    ]\n",")\n","history_aware_retriever = create_history_aware_retriever(\n","    llm, retriever, contextualize_q_prompt\n",")"]},{"cell_type":"markdown","id":"3caecbd8","metadata":{"id":"3caecbd8"},"source":["**API Reference:**[create_history_aware_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html)\n","\n","In this chain, a question reformulation generated from the chat history is prepended to the knowledge-base retriever, so that the retrieval step can incorporate the context of the conversation.\n","\n","Now we can build the full Q&A chain. We simply replace the retriever with our new `history_aware_retriever`.\n","\n","We again use [`create_stuff_documents_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) to build a `question_answer_chain`. This chain only cares about the model's inputs and outputs; it is not concerned with how the knowledge base is queried. Its inputs are the retrieved context `context`, the chat history `chat_history`, and the input question `input`.\n","\n","We use [`create_retrieval_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) to build the final `rag_chain`, passing it the `history_aware_retriever` and the `question_answer_chain`. When invoked, `rag_chain` takes the question `input` and the chat history `chat_history`; its output includes the question `input`, the chat history `chat_history`, the retrieved context `context`, and the final answer `answer`."]},{"cell_type":"code","execution_count":6,"id":"1fee0a71","metadata":{"id":"1fee0a71","executionInfo":{"status":"ok","timestamp":1718326713654,"user_tz":-480,"elapsed":466,"user":{"displayName":"李辉","userId":"12972001611808140221"}}},"outputs":[],"source":["from langchain.chains import create_retrieval_chain\n","from langchain.chains.combine_documents import create_stuff_documents_chain\n","\n","qa_prompt = ChatPromptTemplate.from_messages(\n","    [\n","        (\"system\", system_prompt),\n","        MessagesPlaceholder(\"chat_history\"),\n","        (\"human\", \"{input}\"),\n","    ]\n",")\n","\n","question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)\n","rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)"]},{"cell_type":"markdown","id":"af4ddaf9","metadata":{"id":"af4ddaf9"},"source":["**API Reference:**[create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) | [create_stuff_documents_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html)\n","\n","Let's try it out. Below we ask a question and then a follow-up question that requires context, and check whether the correct answer is returned."]},{"cell_type":"code","execution_count":7,"id":"1783bdab","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"1783bdab","executionInfo":{"status":"ok","timestamp":1718326736257,"user_tz":-480,"elapsed":8073,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"4cf4041b-ec2e-49c7-de2f-cd5411f315a6"},"outputs":[{"output_type":"stream","name":"stdout","text":["Task decomposition can be achieved through several common methods, including:\n","1. 
Using Language Model (LLM) with simple prompting like \"Steps for XYZ\" or asking for subgoals.\n","2. Providing task-specific instructions tailored to the desired outcome, such as \"Write a story outline\" for novel writing.\n","3. Incorporating human inputs to guide the decomposition process effectively.\n"]}],"source":["from langchain_core.messages import AIMessage, HumanMessage\n","\n","chat_history = []\n","\n","question = \"What is Task Decomposition?\"\n","ai_msg_1 = rag_chain.invoke({\"input\": question, \"chat_history\": chat_history})\n","chat_history.extend(\n","    [\n","        HumanMessage(content=question),\n","        AIMessage(content=ai_msg_1[\"answer\"]),\n","    ]\n",")\n","\n","second_question = \"What are common ways of doing it?\"\n","ai_msg_2 = rag_chain.invoke({\"input\": second_question, \"chat_history\": chat_history})\n","\n","print(ai_msg_2[\"answer\"])\n"]},{"cell_type":"markdown","id":"60ec273d","metadata":{"id":"60ec273d"},"source":["\n","### Stateful management of chat history\n","\n","Above we showed how to incorporate historical messages into the `rag_chain`, but we were still updating the chat history by hand. In a real Q&A application, we want a way to persist the chat history, and a way to insert and update it automatically.\n","\n","For this we can use:\n","\n","- [BaseChatMessageHistory](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.memory): stores the chat history;\n","- [RunnableWithMessageHistory](https://python.langchain.com/v0.2/docs/how_to/message_history/): wraps a `BaseChatMessageHistory` into an object that can be added to the `rag_chain`, automatically injecting the chat history into the prompt and updating it after every call.\n","\n","For details on how to add message history to a chain, see the official guide [How to add message history (memory)](https://python.langchain.com/v0.2/docs/how_to/message_history/).\n","\n","Below, we store the chat histories in a simple dict, `store`. LangChain also supports memory integrations with Redis and other technologies.\n","\n","Next, we define a `RunnableWithMessageHistory` instance to help manage the chat history. It requires a config parameter with a key (\"session_id\" by default) that specifies which conversation history to fetch and prepend to the input question; the model's output is then appended to the same history."]},{"cell_type":"code","execution_count":8,"id":"e08e0807","metadata":{"id":"e08e0807","executionInfo":{"status":"ok","timestamp":1718326755518,"user_tz":-480,"elapsed":437,"user":{"displayName":"李辉","userId":"12972001611808140221"}}},"outputs":[],"source":["from langchain_community.chat_message_histories import ChatMessageHistory\n","from langchain_core.chat_history import BaseChatMessageHistory\n","from langchain_core.runnables.history import RunnableWithMessageHistory\n","\n","store = {}\n","\n","def get_session_history(session_id: str) -> BaseChatMessageHistory:\n","  if session_id not in store:\n","    store[session_id] = ChatMessageHistory()\n","  return store[session_id]\n","\n","conversational_rag_chain = RunnableWithMessageHistory(\n","    rag_chain,\n","    get_session_history,\n","    input_messages_key=\"input\",\n","    history_messages_key=\"chat_history\",\n","    output_messages_key=\"answer\",\n",")\n"]},{"cell_type":"markdown","id":"99faca48","metadata":{"id":"99faca48"},"source":["**API Reference:**[ChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.ChatMessageHistory.html) | [BaseChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html) | 
[RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html)"]},{"cell_type":"code","execution_count":null,"id":"4962c448","metadata":{"colab":{"base_uri":"https://localhost:8080/","height":87},"id":"4962c448","executionInfo":{"status":"ok","timestamp":1718199789109,"user_tz":-480,"elapsed":6163,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"55ecc436-709a-4e1b-d10b-23de8c2d22b9"},"outputs":[{"output_type":"stream","name":"stderr","text":["WARNING:langchain_core.tracers.core:Parent run 250e5e7f-0ba5-4299-b11b-8302f0765199 not found for run 9969f5ef-d1af-447a-8ecd-9df81950364a. Treating as a root run.\n"]},{"output_type":"execute_result","data":{"text/plain":["'Task decomposition is the process of breaking down complex tasks into smaller and simpler steps to make them more manageable for an agent or model. It involves transforming big tasks into multiple smaller tasks, allowing for a more systematic approach to problem-solving. 
Task decomposition can be achieved through techniques like Chain of Thought (CoT) or Tree of Thoughts, which help in organizing the steps needed to accomplish a larger goal.'"],"application/vnd.google.colaboratory.intrinsic+json":{"type":"string"}},"metadata":{},"execution_count":12}],"source":["conversational_rag_chain.invoke(\n","{\"input\": \"What is Task Decomposition?\"},\n","config={\n","  \"configurable\": {\"session_id\": \"abc123\"}\n","},  # constructs a key \"abc123\" in `store`.\n",")[\"answer\"]\n"]},{"cell_type":"code","execution_count":null,"id":"443bc7d8","metadata":{"colab":{"base_uri":"https://localhost:8080/","height":87},"id":"443bc7d8","executionInfo":{"status":"ok","timestamp":1718199819109,"user_tz":-480,"elapsed":5454,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"4628c070-3684-4c29-8491-32cfb75f5da1"},"outputs":[{"output_type":"stream","name":"stderr","text":["WARNING:langchain_core.tracers.core:Parent run 2435d17c-a916-49c1-be46-4fd52780b7b2 not found for run 799cd50f-e51b-4270-9d47-e8e7122e0087. Treating as a root run.\n"]},{"output_type":"execute_result","data":{"text/plain":["'Task decomposition can be done in several common ways:\\n\\n1. Using techniques like Chain of Thought (CoT) or Tree of Thoughts to break down complex tasks into smaller, more manageable steps.\\n2. Providing simple prompts to guide the decomposition process, such as asking for subgoals or steps needed to achieve a specific task.\\n3. 
Utilizing task-specific instructions tailored to the nature of the task, such as requesting a story outline for writing a novel.'"],"application/vnd.google.colaboratory.intrinsic+json":{"type":"string"}},"metadata":{},"execution_count":13}],"source":["conversational_rag_chain.invoke(\n","{\"input\": \"What are common ways of doing it?\"},\n","config={\"configurable\": {\"session_id\": \"abc123\"}},\n",")[\"answer\"]\n"]},{"cell_type":"code","execution_count":null,"id":"611a055c","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"611a055c","executionInfo":{"status":"ok","timestamp":1718199830480,"user_tz":-480,"elapsed":446,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"2c3bd2bb-49ea-4fad-c134-a5044cfc1399"},"outputs":[{"output_type":"stream","name":"stdout","text":["User: What is Task Decomposition?\n","\n","AI: Task decomposition is the process of breaking down complex tasks into smaller and simpler steps to make them more manageable for an agent or model. It involves transforming big tasks into multiple smaller tasks, allowing for a more systematic approach to problem-solving. Task decomposition can be achieved through techniques like Chain of Thought (CoT) or Tree of Thoughts, which help in organizing the steps needed to accomplish a larger goal.\n","\n","User: What are common ways of doing it?\n","\n","AI: Task decomposition can be done in several common ways:\n","\n","1. Using techniques like Chain of Thought (CoT) or Tree of Thoughts to break down complex tasks into smaller, more manageable steps.\n","2. Providing simple prompts to guide the decomposition process, such as asking for subgoals or steps needed to achieve a specific task.\n","3. 
Utilizing task-specific instructions tailored to the nature of the task, such as requesting a story outline for writing a novel.\n","\n"]}],"source":["for message in store[\"abc123\"].messages:\n","  if isinstance(message, AIMessage):\n","    prefix = \"AI\"\n","  else:\n","    prefix = \"User\"\n","\n","  print(f\"{prefix}: {message.content}\\n\")\n"]},{"cell_type":"markdown","id":"b03b53c7","metadata":{"id":"b03b53c7"},"source":["\n","### Tying it together\n","\n","![https://python.langchain.com/v0.2/assets/images/conversational_retrieval_chain-5c7a96abe29e582bc575a0a0d63f86b0.png](https://python.langchain.com/v0.2/assets/images/conversational_retrieval_chain-5c7a96abe29e582bc575a0a0d63f86b0.png)\n","\n","For convenience, here is all of the code above in one place:"]},{"cell_type":"code","execution_count":null,"id":"d66c1a2a","metadata":{"id":"d66c1a2a"},"outputs":[],"source":["import bs4\n","from langchain.chains import create_history_aware_retriever, create_retrieval_chain\n","from langchain.chains.combine_documents import create_stuff_documents_chain\n","from langchain_chroma import Chroma\n","from langchain_community.chat_message_histories import ChatMessageHistory\n","from langchain_community.document_loaders import WebBaseLoader\n","from langchain_core.chat_history import BaseChatMessageHistory\n","from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n","from langchain_core.runnables.history import RunnableWithMessageHistory\n","from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n","from langchain_text_splitters import RecursiveCharacterTextSplitter\n","\n","llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n","\n","### Construct retriever ###\n","loader = WebBaseLoader(\n","    web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n","    bs_kwargs=dict(\n","        parse_only=bs4.SoupStrainer(\n","            class_=(\"post-content\", \"post-title\", \"post-header\")\n",")\n","),\n",")\n","docs = loader.load()\n","\n","text_splitter = 
RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n","splits = text_splitter.split_documents(docs)\n","vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())\n","retriever = vectorstore.as_retriever()\n","\n","### Contextualize question ###\n","contextualize_q_system_prompt = (\n","\"Given a chat history and the latest user question \"\n","\"which might reference context in the chat history, \"\n","\"formulate a standalone question which can be understood \"\n","\"without the chat history. Do NOT answer the question, \"\n","\"just reformulate it if needed and otherwise return it as is.\"\n",")\n","contextualize_q_prompt = ChatPromptTemplate.from_messages(\n","[\n","(\"system\", contextualize_q_system_prompt),\n","        MessagesPlaceholder(\"chat_history\"),\n","(\"human\", \"{input}\"),\n","]\n",")\n","history_aware_retriever = create_history_aware_retriever(\n","    llm, retriever, contextualize_q_prompt\n",")\n","\n","### Answer question ###\n","system_prompt = (\n","\"You are an assistant for question-answering tasks. \"\n","\"Use the following pieces of retrieved context to answer \"\n","\"the question. If you don't know the answer, say that you \"\n","\"don't know. 
Use three sentences maximum and keep the \"\n","\"answer concise.\"\n","\"\\n\\n\"\n","\"{context}\"\n",")\n","qa_prompt = ChatPromptTemplate.from_messages(\n","[\n","(\"system\", system_prompt),\n","        MessagesPlaceholder(\"chat_history\"),\n","(\"human\", \"{input}\"),\n","]\n",")\n","question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)\n","\n","rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)\n","\n","### Statefully manage chat history ###\n","store = {}\n","\n","def get_session_history(session_id: str) -> BaseChatMessageHistory:\n","  if session_id not in store:\n","    store[session_id] = ChatMessageHistory()\n","  return store[session_id]\n","\n","conversational_rag_chain = RunnableWithMessageHistory(\n","    rag_chain,\n","    get_session_history,\n","    input_messages_key=\"input\",\n","    history_messages_key=\"chat_history\",\n","    output_messages_key=\"answer\",\n",")\n"]},{"cell_type":"markdown","id":"f0d609f4","metadata":{"id":"f0d609f4"},"source":["**API Reference:**[create_history_aware_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) | [create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) | [create_stuff_documents_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) | [ChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.ChatMessageHistory.html) | [WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [BaseChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html) | 
[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html) | [RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)"]},{"cell_type":"code","execution_count":null,"id":"b06a1882","metadata":{"colab":{"base_uri":"https://localhost:8080/","height":107},"id":"b06a1882","executionInfo":{"status":"ok","timestamp":1718160074672,"user_tz":-480,"elapsed":4024,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"1b1667c5-022d-4b8d-9935-14d8eb417201"},"outputs":[{"output_type":"stream","name":"stderr","text":["WARNING:langchain_core.tracers.core:Parent run 0540f3be-d669-486d-8c2c-cea41026b664 not found for run 6a6ab04d-3344-466e-ad98-28934c940f34. Treating as a root run.\n"]},{"output_type":"execute_result","data":{"text/plain":["'Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. It involves transforming big tasks into multiple manageable tasks to facilitate problem-solving. 
This process can be done through prompting techniques like Chain of Thought or Tree of Thoughts, which help agents plan and execute tasks effectively.'"],"application/vnd.google.colaboratory.intrinsic+json":{"type":"string"}},"metadata":{},"execution_count":33}],"source":["conversational_rag_chain.invoke(\n","{\"input\": \"What is Task Decomposition?\"},\n","    config={\n","\"configurable\": {\"session_id\": \"abc123\"}\n","},  # constructs a key \"abc123\" in `store`.\n",")[\"answer\"]"]},{"cell_type":"code","execution_count":null,"id":"f9e185a1","metadata":{"colab":{"base_uri":"https://localhost:8080/","height":124},"id":"f9e185a1","executionInfo":{"status":"ok","timestamp":1718160128910,"user_tz":-480,"elapsed":23828,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"3a59e8ec-b260-49aa-fe0e-57ad64bc79cf"},"outputs":[{"output_type":"stream","name":"stderr","text":["WARNING:langchain_core.tracers.core:Parent run 0f47d0c2-99b2-4c0e-84d4-57c07f2f0ca3 not found for run 643c3815-6d39-4507-8f27-ebc6e7ecefcb. Treating as a root run.\n"]},{"output_type":"execute_result","data":{"text/plain":["'Task decomposition can be achieved through various methods such as using prompting techniques like Chain of Thought or Tree of Thoughts, which guide models to break down tasks into smaller steps. Additionally, task-specific instructions can be provided to help agents understand how to decompose a task effectively, such as asking them to \"Write a story outline\" for writing a novel. 
Human inputs can also be used to decompose tasks into manageable components.'"],"application/vnd.google.colaboratory.intrinsic+json":{"type":"string"}},"metadata":{},"execution_count":34}],"source":["conversational_rag_chain.invoke(\n","{\"input\": \"What are common ways of doing it?\"},\n","    config={\"configurable\": {\"session_id\": \"abc123\"}},\n",")[\"answer\"]"]},{"cell_type":"markdown","source":["We have now built an application that has memory and retrieves from a local knowledge base. However, this Q&A app runs a knowledge-base retrieval step no matter what we ask, even though in practice only certain questions actually require it. Can we let the LLM itself decide whether answering a given question requires retrieval?"],"metadata":{"id":"xNloRfUoU5dL"},"id":"xNloRfUoU5dL"},{"cell_type":"markdown","source":["## Agents[](https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/#agents)\n","\n","Agents leverage the reasoning capabilities of LLMs to make decisions during execution. Using an agent lets the model itself decide whether a retrieval step is needed. Although agent behavior is less predictable than a predefined chain, this also offers some advantages:\n","\n","- Agents generate the input to the retriever directly, without us necessarily having to contextualize the question explicitly;\n","- Agents can execute multiple retrieval steps as the query requires, or none at all (e.g., when responding to a user's greeting).\n","\n","### Retrieval tool[](https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/#retrieval-tool)\n","\n","Agents can access and use tools. Here we wrap our local knowledge-base retriever `retriever` from above into a tool the agent can call."],"metadata":{"id":"hc1FxrcRU-c8"},"id":"hc1FxrcRU-c8"},{"cell_type":"code","execution_count":9,"id":"11e6d566","metadata":{"id":"11e6d566","executionInfo":{"status":"ok","timestamp":1718326776217,"user_tz":-480,"elapsed":432,"user":{"displayName":"李辉","userId":"12972001611808140221"}}},"outputs":[],"source":["from langchain.tools.retriever import create_retriever_tool\n","\n","tool = create_retriever_tool(\n","    retriever,\n","    \"blog_post_retriever\",\n","    \"Searches and returns excerpts from the Autonomous Agents blog post.\",\n",")\n","tools = [tool]"]},{"cell_type":"markdown","id":"dd37e1fc","metadata":{"id":"dd37e1fc"},"source":["**API Reference:**[create_retriever_tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.create_retriever_tool.html)\n","\n","定义好的工具实现了 
[Runnables](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language) 相关的接口，可以直接使用 `invoke` 等方法调用"]},{"cell_type":"code","execution_count":null,"id":"7ffc49a1","metadata":{"colab":{"base_uri":"https://localhost:8080/","height":122},"id":"7ffc49a1","executionInfo":{"status":"ok","timestamp":1718200134313,"user_tz":-480,"elapsed":1591,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"9fe36e63-52d7-46b8-beee-6babb7ddf2a6"},"outputs":[{"output_type":"execute_result","data":{"text/plain":["'Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. 
CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\\n\\n(3) Task execution: Expert models execute on the specific tasks and log results.\\nInstruction:\\n\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.\\n\\nFig. 11. Illustration of how HuggingGPT works. (Image source: Shen et al. 2023)\\nThe system comprises of 4 stages:\\n(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. 
They use few-shot examples to guide LLM to do task parsing and planning.\\nInstruction:'"],"application/vnd.google.colaboratory.intrinsic+json":{"type":"string"}},"metadata":{},"execution_count":18}],"source":["tool.invoke(\"task decomposition\")"]},{"cell_type":"markdown","id":"30b18685","metadata":{"id":"30b18685"},"source":["\n","### Agent constructor[](https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/#agent-constructor)\n","\n","Now that we have defined the tools and the LLM, we can create the agent. We use LangGraph to build it. For now we construct the agent through a high-level interface, but a strength of LangGraph is that this high-level interface is backed by a low-level, highly controllable API, so the agent's logic can be modified whenever needed."]},{"cell_type":"code","execution_count":12,"id":"fbc24e39","metadata":{"id":"fbc24e39","executionInfo":{"status":"ok","timestamp":1718326791857,"user_tz":-480,"elapsed":425,"user":{"displayName":"李辉","userId":"12972001611808140221"}}},"outputs":[],"source":["from langgraph.prebuilt import create_react_agent\n","\n","agent_executor = create_react_agent(llm, tools)"]},{"cell_type":"markdown","id":"05ab72bd","metadata":{"id":"05ab72bd"},"source":["LangGraph comes with persistence built in, so we don't need to use `ChatMessageHistory`. We can pass a checkpointer (`checkpointer`) directly to our LangGraph agent."]},{"cell_type":"code","execution_count":11,"id":"ad8b490f","metadata":{"id":"ad8b490f","executionInfo":{"status":"ok","timestamp":1718326785754,"user_tz":-480,"elapsed":4,"user":{"displayName":"李辉","userId":"12972001611808140221"}}},"outputs":[],"source":["from langgraph.checkpoint.sqlite import SqliteSaver\n","\n","memory = SqliteSaver.from_conn_string(\":memory:\")\n","\n","agent_executor = create_react_agent(llm, tools, 
checkpointer=memory)"]},{"cell_type":"markdown","id":"ce97dda0","metadata":{"id":"ce97dda0"},"source":["Let's try running the agent. If our query doesn't require retrieval, the agent performs no retrieval step."]},{"cell_type":"code","execution_count":13,"id":"c877b11a","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"c877b11a","executionInfo":{"status":"ok","timestamp":1718326796584,"user_tz":-480,"elapsed":2117,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"856e74cd-cf10-4284-840f-3f5e933c114d"},"outputs":[{"output_type":"stream","name":"stdout","text":["{'agent': {'messages': [AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 67, 'total_tokens': 78}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-52df3916-e45d-483e-9b6c-dd2b7d864940-0', usage_metadata={'input_tokens': 67, 'output_tokens': 11, 'total_tokens': 78})]}}\n","----\n"]}],"source":["config = {\"configurable\": {\"thread_id\": \"abc123\"}}\n","\n","for s in agent_executor.stream(\n","{\"messages\": [HumanMessage(content=\"Hi! 
I'm bob\")]}, config=config\n","):\n","  print(s)\n","  print(\"----\")"]},{"cell_type":"markdown","id":"8e53a8cd","metadata":{"id":"8e53a8cd"},"source":["\n","And if our query does require a retrieval step, the agent generates the input to the tool."]},{"cell_type":"code","execution_count":null,"id":"6051551d","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"6051551d","executionInfo":{"status":"ok","timestamp":1718200359392,"user_tz":-480,"elapsed":6746,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"9e25a497-4a77-4e0b-ac85-2f4ba527bbf0"},"outputs":[{"output_type":"stream","name":"stdout","text":["{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_77rTwT2Qlk7ZxgOITlJh6Pzl', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 92, 'total_tokens': 111}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_811936bd4f', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-a1ce3179-3378-446c-b39c-aab63eeda001-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_77rTwT2Qlk7ZxgOITlJh6Pzl'}], usage_metadata={'input_tokens': 92, 'output_tokens': 19, 'total_tokens': 111})]}}\n","----\n","{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. 
CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\n(3) Task execution: Expert models execute on the specific tasks and log results.\\nInstruction:\\n\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.\\n\\nFig. 11. Illustration of how HuggingGPT works. (Image source: Shen et al. 2023)\\nThe system comprises of 4 stages:\\n(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. 
They use few-shot examples to guide LLM to do task parsing and planning.\\nInstruction:', name='blog_post_retriever', tool_call_id='call_77rTwT2Qlk7ZxgOITlJh6Pzl')]}}\n","----\n","{'agent': {'messages': [AIMessage(content='Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable. This approach helps in enhancing model performance on complex tasks by instructing the model to \"think step by step\" and decompose hard tasks into smaller components. This process transforms big tasks into multiple manageable tasks, shedding light on the model\\'s thinking process.\\n\\nThere are techniques like Chain of Thought and Tree of Thoughts that extend the concept of task decomposition. Chain of Thought instructs the model to think step by step, while Tree of Thoughts explores multiple reasoning possibilities at each step by creating a tree structure. Task decomposition can be done using simple prompting, task-specific instructions, or with human inputs.\\n\\nOverall, task decomposition is a strategy to break down complex tasks into smaller, more manageable steps for better understanding and execution.', response_metadata={'token_usage': {'completion_tokens': 161, 'prompt_tokens': 612, 'total_tokens': 773}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-74a9ffce-5625-471f-adb3-698b50af9b9c-0', usage_metadata={'input_tokens': 612, 'output_tokens': 161, 'total_tokens': 773})]}}\n","----\n"]}],"source":["query = \"What is Task Decomposition?\"\n","\n","for s in agent_executor.stream(\n","{\"messages\": [HumanMessage(content=query)]}, config=config\n","):\n","  print(s)\n","  print(\"----\")"]},{"cell_type":"markdown","id":"3a4dafad","metadata":{"id":"3a4dafad"},"source":["In the example above, the agent did not insert our query into the tool verbatim; it stripped out unnecessary words like \"what\" and \"is\".\n","\n","This also lets the agent draw on the context of the conversation when needed."]},{"cell_type":"code","execution_count":null,"id":"7ae9c447","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"7ae9c447","executionInfo":{"status":"ok","timestamp":1718200429800,"user_tz":-480,"elapsed":7241,"user":{"displayName":"李辉","userId":"12972001611808140221"}},"outputId":"ba324f38-7033-455b-c733-80ec937e9ad8"},"outputs":[{"output_type":"stream","name":"stdout","text":["{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_db6SM3QoPbNabE9gF3UUq8RW', 'function': {'arguments': '{\"query\":\"common ways of task decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 1488, 'total_tokens': 1509}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_811936bd4f', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-de500626-1fe2-47cb-93b2-cea08ea21d4f-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'common ways of task decomposition'}, 'id': 'call_db6SM3QoPbNabE9gF3UUq8RW'}], usage_metadata={'input_tokens': 1488, 'output_tokens': 21, 'total_tokens': 1509})]}}\n","----\n","{'tools': {'messages': [ToolMessage(content='Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nFig. 1. 
Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\\n\\nResources:\\n1. Internet access for searches and information gathering.\\n2. Long Term memory management.\\n3. GPT-3.5 powered Agents for delegation of simple tasks.\\n4. File output.\\n\\nPerformance Evaluation:\\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\\n2. Constructively self-criticize your big-picture behavior constantly.\\n3. Reflect on past decisions and strategies to refine your approach.\\n4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.\\n\\n(3) Task execution: Expert models execute on the specific tasks and log results.\\nInstruction:\\n\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. 
If inference results contain a file path, must tell the user the complete file path.', name='blog_post_retriever', tool_call_id='call_db6SM3QoPbNabE9gF3UUq8RW')]}}\n","----\n","{'agent': {'messages': [AIMessage(content='According to the blog post, common ways of task decomposition include:\\n\\n1. Using Chain of Thought (CoT) technique, which prompts the model to \"think step by step\" and decompose hard tasks into smaller and simpler steps.\\n2. Utilizing Tree of Thoughts, which explores multiple reasoning possibilities at each step by creating a tree structure and generating multiple thoughts per step.\\n3. Task decomposition can also be done using simple prompting, task-specific instructions, or with human inputs.\\n\\nThese methods of task decomposition aim to transform big tasks into multiple manageable tasks and shed light on the interpretation of the model\\'s thinking process.', response_metadata={'token_usage': {'completion_tokens': 124, 'prompt_tokens': 2033, 'total_tokens': 2157}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-931ffc1f-c5e1-4657-bade-ea33c5402d95-0', usage_metadata={'input_tokens': 2033, 'output_tokens': 124, 'total_tokens': 2157})]}}\n","----\n"]}],"source":["query = \"What according to the blog post are common ways of doing it? 
redo the search\"\n","\n","for s in agent_executor.stream(\n","{\"messages\": [HumanMessage(content=query)]}, config=config\n","):\n","  print(s)\n","  print(\"----\")"]},{"cell_type":"markdown","id":"b423756a","metadata":{"id":"b423756a"},"source":["\n","Note that the agent was able to infer that \"it\" in our query refers to \"task decomposition\", and accordingly generated a reasonable search query, in this case \"common ways of task decomposition\".\n","\n","### Putting it together"]},{"cell_type":"code","execution_count":null,"id":"22949676","metadata":{"id":"22949676"},"outputs":[],"source":["import bs4\n","from langchain.agents import AgentExecutor, create_tool_calling_agent\n","from langchain.tools.retriever import create_retriever_tool\n","from langchain_chroma import Chroma\n","from langchain_community.chat_message_histories import ChatMessageHistory\n","from langchain_community.document_loaders import WebBaseLoader\n","from langchain_core.chat_history import BaseChatMessageHistory\n","from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n","from langchain_core.runnables.history import RunnableWithMessageHistory\n","from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n","from langchain_text_splitters import RecursiveCharacterTextSplitter\n","from langgraph.checkpoint.sqlite import SqliteSaver\n","from langgraph.prebuilt import create_react_agent\n","\n","memory = SqliteSaver.from_conn_string(\":memory:\")\n","llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n","\n","### Construct retriever ###\n","loader = WebBaseLoader(\n","    web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n","    bs_kwargs=dict(\n","        parse_only=bs4.SoupStrainer(\n","            class_=(\"post-content\", \"post-title\", \"post-header\")\n",")\n","),\n",")\n","docs = loader.load()\n","\n","text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n","splits = text_splitter.split_documents(docs)\n","vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())\n","retriever = vectorstore.as_retriever()\n","\n","### Build retriever tool ###\n","tool = 
create_retriever_tool(\n","    retriever,\n","\"blog_post_retriever\",\n","\"Searches and returns excerpts from the Autonomous Agents blog post.\",\n",")\n","tools = [tool]\n","\n","agent_executor = create_react_agent(llm, tools, checkpointer=memory)"]},{"cell_type":"markdown","id":"b2ac9b23","metadata":{"id":"b2ac9b23"},"source":["**API Reference:**[AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html) | [create_tool_calling_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html) | [create_retriever_tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.create_retriever_tool.html) | [ChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.ChatMessageHistory.html) | [WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [BaseChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html) | [RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)\n","\n","## 
Summary\n","\n","In this tutorial we **added logic for incorporating historical messages** on top of the RAG chain, and then built an agent that can decide on its own whether to call the knowledge-base retrieval tool."]}],"metadata":{"jupytext":{"cell_metadata_filter":"-all","main_language":"python","notebook_metadata_filter":"-all"},"colab":{"provenance":[]},"language_info":{"name":"python"},"kernelspec":{"name":"python3","display_name":"Python 3"}},"nbformat":4,"nbformat_minor":5}