omarsol committed on
Commit
af538bd
1 Parent(s): 7bb4617

934adcc146427eab1ef3e2190ed8ffadfa25ab182e0dff5f3f1a80993fb577fa

Files changed (50)
  1. langchain_md_files/integrations/providers/tigris.mdx +19 -0
  2. langchain_md_files/integrations/providers/tomarkdown.mdx +16 -0
  3. langchain_md_files/integrations/providers/trello.mdx +22 -0
  4. langchain_md_files/integrations/providers/trubrics.mdx +24 -0
  5. langchain_md_files/integrations/providers/trulens.mdx +82 -0
  6. langchain_md_files/integrations/providers/twitter.mdx +25 -0
  7. langchain_md_files/integrations/providers/typesense.mdx +22 -0
  8. langchain_md_files/integrations/providers/unstructured.mdx +234 -0
  9. langchain_md_files/integrations/providers/upstash.mdx +221 -0
  10. langchain_md_files/integrations/providers/usearch.mdx +25 -0
  11. langchain_md_files/integrations/providers/vdms.mdx +62 -0
  12. langchain_md_files/integrations/providers/vectara/index.mdx +182 -0
  13. langchain_md_files/integrations/providers/vespa.mdx +21 -0
  14. langchain_md_files/integrations/providers/vlite.mdx +31 -0
  15. langchain_md_files/integrations/providers/voyageai.mdx +32 -0
  16. langchain_md_files/integrations/providers/weather.mdx +21 -0
  17. langchain_md_files/integrations/providers/weaviate.mdx +38 -0
  18. langchain_md_files/integrations/providers/whatsapp.mdx +18 -0
  19. langchain_md_files/integrations/providers/wikipedia.mdx +28 -0
  20. langchain_md_files/integrations/providers/wolfram_alpha.mdx +39 -0
  21. langchain_md_files/integrations/providers/writer.mdx +16 -0
  22. langchain_md_files/integrations/providers/xata.mdx +36 -0
  23. langchain_md_files/integrations/providers/xinference.mdx +102 -0
  24. langchain_md_files/integrations/providers/yandex.mdx +33 -0
  25. langchain_md_files/integrations/providers/yeagerai.mdx +43 -0
  26. langchain_md_files/integrations/providers/yi.mdx +23 -0
  27. langchain_md_files/integrations/providers/youtube.mdx +22 -0
  28. langchain_md_files/integrations/providers/zep.mdx +120 -0
  29. langchain_md_files/integrations/providers/zhipuai.mdx +18 -0
  30. langchain_md_files/integrations/providers/zilliz.mdx +22 -0
  31. langchain_md_files/integrations/retrievers/index.mdx +37 -0
  32. langchain_md_files/integrations/retrievers/self_query/index.mdx +11 -0
  33. langchain_md_files/integrations/text_embedding/index.mdx +18 -0
  34. langchain_md_files/integrations/vectorstores/index.mdx +17 -0
  35. langchain_md_files/introduction.mdx +98 -0
  36. langchain_md_files/people.mdx +46 -0
  37. langchain_md_files/tutorials/index.mdx +54 -0
  38. langchain_md_files/versions/overview.mdx +103 -0
  39. langchain_md_files/versions/release_policy.mdx +102 -0
  40. langchain_md_files/versions/v0_2/deprecations.mdx +902 -0
  41. langchain_md_files/versions/v0_2/index.mdx +93 -0
  42. langchain_md_files/versions/v0_2/migrating_astream_events.mdx +118 -0
  43. openai-cookbook_md_files/How_to_build_an_agent_with_the_node_sdk.mdx +492 -0
  44. openai-cookbook_md_files/vector_databases/supabase/semantic-search.mdx +276 -0
  45. trl_md_files/alignprop_trainer.mdx +91 -0
  46. trl_md_files/bco_trainer.mdx +139 -0
  47. trl_md_files/best_of_n.mdx +72 -0
  48. trl_md_files/callbacks.mdx +13 -0
  49. trl_md_files/clis.mdx +119 -0
  50. trl_md_files/cpo_trainer.mdx +113 -0
langchain_md_files/integrations/providers/tigris.mdx ADDED
@@ -0,0 +1,19 @@
1
+ # Tigris
2
+
3
+ > [Tigris](https://tigrisdata.com) is an open-source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.
4
+ > `Tigris` eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead.
5
+
6
+ ## Installation and Setup
7
+
8
+
9
+ ```bash
10
+ pip install tigrisdb openapi-schema-pydantic
11
+ ```
12
+
13
+ ## Vector Store
14
+
15
+ See a [usage example](/docs/integrations/vectorstores/tigris).
16
+
17
+ ```python
18
+ from langchain_community.vectorstores import Tigris
19
+ ```
langchain_md_files/integrations/providers/tomarkdown.mdx ADDED
@@ -0,0 +1,16 @@
1
+ # 2Markdown
2
+
3
+ >[2markdown](https://2markdown.com/) service transforms website content into structured markdown files.
4
+
5
+
6
+ ## Installation and Setup
7
+
8
+ We need an `API key`. See the [instructions on how to get it](https://2markdown.com/login).
9
+
10
+ ## Document Loader
11
+
12
+ See a [usage example](/docs/integrations/document_loaders/tomarkdown).
13
+
14
+ ```python
15
+ from langchain_community.document_loaders import ToMarkdownLoader
16
+ ```
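+
+ As a rough sketch (assuming `ToMarkdownLoader` accepts `url` and `api_key` arguments; replace the placeholders with your own values):
+
+ ```python
+ from langchain_community.document_loaders import ToMarkdownLoader
+
+ # Hypothetical values: point the loader at a page and pass your 2markdown API key.
+ loader = ToMarkdownLoader(url="https://python.langchain.com/", api_key="<your-2markdown-api-key>")
+ docs = loader.load()
+ print(docs[0].page_content[:200])
+ ```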
langchain_md_files/integrations/providers/trello.mdx ADDED
@@ -0,0 +1,22 @@
1
+ # Trello
2
+
3
+ >[Trello](https://www.atlassian.com/software/trello) is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities.
4
+ >The TrelloLoader allows us to load cards from a `Trello` board.
5
+
6
+
7
+ ## Installation and Setup
8
+
9
+ ```bash
10
+ pip install py-trello beautifulsoup4
11
+ ```
12
+
13
+ See [setup instructions](/docs/integrations/document_loaders/trello).
14
+
15
+
16
+ ## Document Loader
17
+
18
+ See a [usage example](/docs/integrations/document_loaders/trello).
19
+
20
+ ```python
21
+ from langchain_community.document_loaders import TrelloLoader
22
+ ```
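+
+ As a rough sketch (assuming the `from_credentials` constructor and the credential names below; replace the placeholders with your own Trello credentials):
+
+ ```python
+ from langchain_community.document_loaders import TrelloLoader
+
+ # Load all cards from one board, using Trello API credentials.
+ loader = TrelloLoader.from_credentials(
+     "My Board",
+     api_key="<trello-api-key>",
+     token="<trello-token>",
+ )
+ documents = loader.load()
+ ```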
langchain_md_files/integrations/providers/trubrics.mdx ADDED
@@ -0,0 +1,24 @@
1
+ # Trubrics
2
+
3
+
4
+ >[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user
5
+ prompts & feedback on AI models.
6
+ >
7
+ >Check out [Trubrics repo](https://github.com/trubrics/trubrics-sdk) for more information on `Trubrics`.
8
+
9
+ ## Installation and Setup
10
+
11
+ We need to install the `trubrics` Python package:
12
+
13
+ ```bash
14
+ pip install trubrics
15
+ ```
16
+
17
+
18
+ ## Callbacks
19
+
20
+ See a [usage example](/docs/integrations/callbacks/trubrics).
21
+
22
+ ```python
23
+ from langchain.callbacks import TrubricsCallbackHandler
24
+ ```
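+
+ A minimal sketch of attaching the handler to a chat model (assumes `TRUBRICS_EMAIL` and `TRUBRICS_PASSWORD` are set in your environment and that an OpenAI chat model is used):
+
+ ```python
+ from langchain.callbacks import TrubricsCallbackHandler
+ from langchain_openai import ChatOpenAI
+
+ # The handler logs prompts and responses to your Trubrics project.
+ llm = ChatOpenAI(callbacks=[TrubricsCallbackHandler()])
+ llm.invoke("Write a haiku about user feedback.")
+ ```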
langchain_md_files/integrations/providers/trulens.mdx ADDED
@@ -0,0 +1,82 @@
1
+ # TruLens
2
+
3
+ >[TruLens](https://trulens.org) is an [open-source](https://github.com/truera/trulens) package that provides instrumentation and evaluation tools for large language model (LLM) based applications.
4
+
5
+ This page covers how to use [TruLens](https://trulens.org) to evaluate and track LLM apps built on langchain.
6
+
7
+
8
+ ## Installation and Setup
9
+
10
+ Install the `trulens-eval` python package.
11
+
12
+ ```bash
13
+ pip install trulens-eval
14
+ ```
15
+
16
+ ## Quickstart
17
+
18
+ See the integration details in the [TruLens documentation](https://www.trulens.org/trulens_eval/getting_started/quickstarts/langchain_quickstart/).
19
+
20
+ ### Tracking
21
+
22
+ Once you've created your LLM chain, you can use TruLens for evaluation and tracking.
23
+ TruLens has a number of [out-of-the-box Feedback Functions](https://www.trulens.org/trulens_eval/evaluation/feedback_functions/),
24
+ and is also an extensible framework for LLM evaluation.
25
+
26
+ Create the feedback functions:
27
+
28
+ ```python
29
+ from trulens_eval.feedback import Feedback, Huggingface, OpenAI
30
+
31
+ # Initialize HuggingFace-based feedback function collection class:
32
+ hugs = Huggingface()
33
+ openai = OpenAI()
34
+
35
+ # Define a language match feedback function using HuggingFace.
36
+ lang_match = Feedback(hugs.language_match).on_input_output()
37
+ # By default this will check language match on the main app input and main app
38
+ # output.
39
+
40
+ # Question/answer relevance between overall question and answer.
41
+ qa_relevance = Feedback(openai.relevance).on_input_output()
42
+ # By default this will evaluate feedback on main app input and main app output.
43
+
44
+ # Toxicity of input
45
+ toxicity = Feedback(openai.toxicity).on_input()
46
+ ```
47
+
48
+ ### Chains
49
+
50
+ After you've set up Feedback Function(s) for evaluating your LLM, you can wrap your application with
51
+ TruChain to get detailed tracing, logging and evaluation of your LLM app.
52
+
53
+ Note: the code for creating the `chain` can be found in
54
+ the [TruLens documentation](https://www.trulens.org/trulens_eval/getting_started/quickstarts/langchain_quickstart/).
55
+
56
+ ```python
57
+ from trulens_eval import TruChain
58
+
59
+ # wrap your chain with TruChain
60
+ truchain = TruChain(
61
+ chain,
62
+ app_id='Chain1_ChatApplication',
63
+ feedbacks=[lang_match, qa_relevance, toxicity]
64
+ )
65
+ # Note: any `feedbacks` specified here will be evaluated and logged whenever the chain is used.
66
+ truchain("que hora es?")
67
+ ```
68
+
69
+ ### Evaluation
70
+
71
+ Now you can explore your LLM-based application!
72
+
73
+ Doing so will help you understand how your LLM application is performing at a glance. As you iterate new versions of your LLM application, you can compare their performance across all of the different quality metrics you've set up. You'll also be able to view evaluations at a record level, and explore the chain metadata for each record.
74
+
75
+ ```python
76
+ from trulens_eval import Tru
77
+
78
+ tru = Tru()
79
+ tru.run_dashboard() # open a Streamlit app to explore
80
+ ```
81
+
82
+ For more information on TruLens, visit [trulens.org](https://www.trulens.org/)
langchain_md_files/integrations/providers/twitter.mdx ADDED
@@ -0,0 +1,25 @@
1
+ # Twitter
2
+
3
+ >[Twitter](https://twitter.com/) is an online social media and social networking service.
4
+
5
+
6
+ ## Installation and Setup
7
+
8
+ ```bash
9
+ pip install tweepy
10
+ ```
11
+
12
+ We must initialize the loader with the `Twitter API` token, and we need to set up the Twitter `username`.
13
+
14
+
15
+ ## Document Loader
16
+
17
+ See a [usage example](/docs/integrations/document_loaders/twitter).
18
+
19
+ ```python
20
+ from langchain_community.document_loaders import TwitterTweetLoader
21
+ ```
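+
+ A short sketch of the loader (assuming the `from_bearer_token` constructor with the arguments shown; replace the token and usernames with your own):
+
+ ```python
+ from langchain_community.document_loaders import TwitterTweetLoader
+
+ loader = TwitterTweetLoader.from_bearer_token(
+     oauth2_bearer_token="<your-bearer-token>",
+     twitter_users=["<username>"],
+     number_tweets=10,  # max tweets to pull per user
+ )
+ documents = loader.load()
+ ```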
22
+
23
+ ## Chat loader
24
+
25
+ See a [usage example](/docs/integrations/chat_loaders/twitter).
langchain_md_files/integrations/providers/typesense.mdx ADDED
@@ -0,0 +1,22 @@
1
+ # Typesense
2
+
3
+ > [Typesense](https://typesense.org) is an open-source, in-memory search engine that you can either
4
+ > [self-host](https://typesense.org/docs/guide/install-typesense.html#option-2-local-machine-self-hosting) or run
5
+ > on [Typesense Cloud](https://cloud.typesense.org/).
6
+ > `Typesense` focuses on performance by storing the entire index in RAM (with a backup on disk) and also
7
+ > focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.
8
+
9
+ ## Installation and Setup
10
+
11
+
12
+ ```bash
13
+ pip install typesense openapi-schema-pydantic
14
+ ```
15
+
16
+ ## Vector Store
17
+
18
+ See a [usage example](/docs/integrations/vectorstores/typesense).
19
+
20
+ ```python
21
+ from langchain_community.vectorstores import Typesense
22
+ ```
langchain_md_files/integrations/providers/unstructured.mdx ADDED
@@ -0,0 +1,234 @@
1
+ # Unstructured
2
+
3
+ >The `unstructured` package from
4
+ [Unstructured.IO](https://www.unstructured.io/) extracts clean text from raw source documents like
5
+ PDFs and Word documents.
6
+ This page covers how to use the [`unstructured`](https://github.com/Unstructured-IO/unstructured)
7
+ ecosystem within LangChain.
8
+
9
+ ## Installation and Setup
10
+
11
+ If you are using a loader that runs locally, use the following steps to get `unstructured` and its
12
+ dependencies running.
13
+
14
+ - For the smallest installation footprint and to take advantage of features not available in the
15
+ open-source `unstructured` package, install the Python SDK with `pip install unstructured-client`
16
+ along with `pip install langchain-unstructured` to use the `UnstructuredLoader` and partition
17
+ remotely against the Unstructured API. This loader lives
18
+ in a LangChain partner repo instead of the `langchain-community` repo and you will need an
19
+ `api_key`; you can generate a free key [here](https://unstructured.io/api-key/).
20
+ - Unstructured's documentation for the sdk can be found here:
21
+ https://docs.unstructured.io/api-reference/api-services/sdk
22
+
23
+ - To run everything locally, install the open-source python package with `pip install unstructured`
24
+ along with `pip install langchain-community` and use the same `UnstructuredLoader` as mentioned above.
25
+ - You can install document specific dependencies with extras, e.g. `pip install "unstructured[docx]"`.
26
+ - To install the dependencies for all document types, use `pip install "unstructured[all-docs]"`.
27
+ - Install the following system dependencies if they are not already available on your system with e.g. `brew install` for Mac.
28
+ Depending on what document types you're parsing, you may not need all of these.
29
+ - `libmagic-dev` (filetype detection)
30
+ - `poppler-utils` (images and PDFs)
31
+ - `tesseract-ocr` (images and PDFs)
32
+ - `qpdf` (PDFs)
33
+ - `libreoffice` (MS Office docs)
34
+ - `pandoc` (EPUBs)
35
+ - When running locally, Unstructured also recommends using Docker [by following this
36
+ guide](https://docs.unstructured.io/open-source/installation/docker-installation) to ensure all
37
+ system dependencies are installed correctly.
38
+
39
+ The Unstructured API requires API keys to make requests.
40
+ You can request an API key [here](https://unstructured.io/api-key-hosted) and start using it today!
41
+ Check out the README [here](https://github.com/Unstructured-IO/unstructured-api) to get started making API calls.
42
+ We'd love to hear your feedback, let us know how it goes in our [community slack](https://join.slack.com/t/unstructuredw-kbe4326/shared_invite/zt-1x7cgo0pg-PTptXWylzPQF9xZolzCnwQ).
43
+ And stay tuned for improvements to both quality and performance!
44
+ Check out the instructions
45
+ [here](https://github.com/Unstructured-IO/unstructured-api#dizzy-instructions-for-using-the-docker-image) if you'd like to self-host the Unstructured API or run it locally.
46
+
47
+
48
+ ## Data Loaders
49
+
50
+ The primary usage of `Unstructured` is in data loaders.
51
+
52
+ ### UnstructuredLoader
53
+
54
+ See a [usage example](/docs/integrations/document_loaders/unstructured_file) to see how you can use
55
+ this loader for both partitioning locally and remotely with the serverless Unstructured API.
56
+
57
+ ```python
58
+ from langchain_unstructured import UnstructuredLoader
59
+ ```
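+
+ As a rough sketch, local partitioning of a single file could look like this; the `partition_via_api` and `api_key` arguments (assumed names) are only needed for the serverless API mode:
+
+ ```python
+ from langchain_unstructured import UnstructuredLoader
+
+ # Local partitioning (requires the open-source `unstructured` package).
+ loader = UnstructuredLoader("example.pdf")
+ docs = loader.load()
+
+ # Remote partitioning against the Unstructured API.
+ remote_loader = UnstructuredLoader(
+     file_path="example.pdf",
+     partition_via_api=True,
+     api_key="<your-unstructured-api-key>",
+ )
+ ```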
60
+
61
+ ### UnstructuredCHMLoader
62
+
63
+ `CHM` means `Microsoft Compiled HTML Help`.
64
+
65
+ ```python
66
+ from langchain_community.document_loaders import UnstructuredCHMLoader
67
+ ```
68
+
69
+ ### UnstructuredCSVLoader
70
+
71
+ A `comma-separated values` (`CSV`) file is a delimited text file that uses
72
+ a comma to separate values. Each line of the file is a data record.
73
+ Each record consists of one or more fields, separated by commas.
74
+
75
+ See a [usage example](/docs/integrations/document_loaders/csv#unstructuredcsvloader).
76
+
77
+ ```python
78
+ from langchain_community.document_loaders import UnstructuredCSVLoader
79
+ ```
80
+
81
+ ### UnstructuredEmailLoader
82
+
83
+ See a [usage example](/docs/integrations/document_loaders/email).
84
+
85
+ ```python
86
+ from langchain_community.document_loaders import UnstructuredEmailLoader
87
+ ```
88
+
89
+ ### UnstructuredEPubLoader
90
+
91
+ [EPUB](https://en.wikipedia.org/wiki/EPUB) is an `e-book file format` that uses
92
+ the “.epub” file extension. The term is short for electronic publication and
93
+ is sometimes styled `ePub`. `EPUB` is supported by many e-readers, and compatible
94
+ software is available for most smartphones, tablets, and computers.
95
+
96
+ See a [usage example](/docs/integrations/document_loaders/epub).
97
+
98
+ ```python
99
+ from langchain_community.document_loaders import UnstructuredEPubLoader
100
+ ```
101
+
102
+ ### UnstructuredExcelLoader
103
+
104
+ See a [usage example](/docs/integrations/document_loaders/microsoft_excel).
105
+
106
+ ```python
107
+ from langchain_community.document_loaders import UnstructuredExcelLoader
108
+ ```
109
+
110
+ ### UnstructuredFileIOLoader
111
+
112
+ See a [usage example](/docs/integrations/document_loaders/google_drive#passing-in-optional-file-loaders).
113
+
114
+ ```python
115
+ from langchain_community.document_loaders import UnstructuredFileIOLoader
116
+ ```
117
+
118
+ ### UnstructuredHTMLLoader
119
+
120
+ See a [usage example](/docs/how_to/document_loader_html).
121
+
122
+ ```python
123
+ from langchain_community.document_loaders import UnstructuredHTMLLoader
124
+ ```
125
+
126
+ ### UnstructuredImageLoader
127
+
128
+ See a [usage example](/docs/integrations/document_loaders/image).
129
+
130
+ ```python
131
+ from langchain_community.document_loaders import UnstructuredImageLoader
132
+ ```
133
+
134
+ ### UnstructuredMarkdownLoader
135
+
136
+ See a [usage example](/docs/integrations/vectorstores/starrocks).
137
+
138
+ ```python
139
+ from langchain_community.document_loaders import UnstructuredMarkdownLoader
140
+ ```
141
+
142
+ ### UnstructuredODTLoader
143
+
144
+ The `Open Document Format for Office Applications (ODF)`, also known as `OpenDocument`,
145
+ is an open file format for word processing documents, spreadsheets, presentations
146
+ and graphics and using ZIP-compressed XML files. It was developed with the aim of
147
+ providing an open, XML-based file format specification for office applications.
148
+
149
+ See a [usage example](/docs/integrations/document_loaders/odt).
150
+
151
+ ```python
152
+ from langchain_community.document_loaders import UnstructuredODTLoader
153
+ ```
154
+
155
+ ### UnstructuredOrgModeLoader
156
+
157
+ An [Org Mode](https://en.wikipedia.org/wiki/Org-mode) document is a document editing, formatting, and organizing mode, designed for notes, planning, and authoring within the free software text editor Emacs.
158
+
159
+ See a [usage example](/docs/integrations/document_loaders/org_mode).
160
+
161
+ ```python
162
+ from langchain_community.document_loaders import UnstructuredOrgModeLoader
163
+ ```
164
+
165
+ ### UnstructuredPDFLoader
166
+
167
+ See a [usage example](/docs/how_to/document_loader_pdf#using-unstructured).
168
+
169
+ ```python
170
+ from langchain_community.document_loaders import UnstructuredPDFLoader
171
+ ```
172
+
173
+ ### UnstructuredPowerPointLoader
174
+
175
+ See a [usage example](/docs/integrations/document_loaders/microsoft_powerpoint).
176
+
177
+ ```python
178
+ from langchain_community.document_loaders import UnstructuredPowerPointLoader
179
+ ```
180
+
181
+ ### UnstructuredRSTLoader
182
+
183
+ A `reStructured Text` (`RST`) file is a file format for textual data
184
+ used primarily in the Python programming language community for technical documentation.
185
+
186
+ See a [usage example](/docs/integrations/document_loaders/rst).
187
+
188
+ ```python
189
+ from langchain_community.document_loaders import UnstructuredRSTLoader
190
+ ```
191
+
192
+ ### UnstructuredRTFLoader
193
+
194
+ See a usage example in the API documentation.
195
+
196
+ ```python
197
+ from langchain_community.document_loaders import UnstructuredRTFLoader
198
+ ```
199
+
200
+ ### UnstructuredTSVLoader
201
+
202
+ A `tab-separated values` (`TSV`) file is a simple, text-based file format for storing tabular data.
203
+ Records are separated by newlines, and values within a record are separated by tab characters.
204
+
205
+ See a [usage example](/docs/integrations/document_loaders/tsv).
206
+
207
+ ```python
208
+ from langchain_community.document_loaders import UnstructuredTSVLoader
209
+ ```
210
+
211
+ ### UnstructuredURLLoader
212
+
213
+ See a [usage example](/docs/integrations/document_loaders/url).
214
+
215
+ ```python
216
+ from langchain_community.document_loaders import UnstructuredURLLoader
217
+ ```
218
+
219
+ ### UnstructuredWordDocumentLoader
220
+
221
+ See a [usage example](/docs/integrations/document_loaders/microsoft_word#using-unstructured).
222
+
223
+ ```python
224
+ from langchain_community.document_loaders import UnstructuredWordDocumentLoader
225
+ ```
226
+
227
+ ### UnstructuredXMLLoader
228
+
229
+ See a [usage example](/docs/integrations/document_loaders/xml).
230
+
231
+ ```python
232
+ from langchain_community.document_loaders import UnstructuredXMLLoader
233
+ ```
234
+
langchain_md_files/integrations/providers/upstash.mdx ADDED
@@ -0,0 +1,221 @@
1
+ Upstash offers developers serverless databases and messaging
2
+ platforms to build powerful applications without having to worry
3
+ about the operational complexity of running databases at scale.
4
+
5
+ One significant advantage of Upstash is that their databases support HTTP and all of their SDKs use HTTP.
6
+ This means that you can run this in serverless platforms, edge or any platform that does not support TCP connections.
7
+
8
+ Currently, there are two Upstash integrations available for LangChain:
9
+ Upstash Vector as a vector embedding database and Upstash Redis as a cache and memory store.
10
+
11
+ # Upstash Vector
12
+
13
+ Upstash Vector is a serverless vector database that can be used to store and query vectors.
14
+
15
+ ## Installation
16
+
17
+ Create a new serverless vector database at the [Upstash Console](https://console.upstash.com/vector).
18
+ Select your preferred distance metric and dimension count according to your model.
19
+
20
+
21
+ Install the Upstash Vector Python SDK with `pip install upstash-vector`.
22
+ The Upstash Vector integration in langchain is a wrapper for the Upstash Vector Python SDK. That's why the `upstash-vector` package is required.
23
+
24
+ ## Integrations
25
+
26
+ Create a `UpstashVectorStore` object using credentials from the Upstash Console.
27
+ You also need to pass in an `Embeddings` object which can turn text into vector embeddings.
28
+
29
+ ```python
30
+ from langchain_community.vectorstores.upstash import UpstashVectorStore
31
+ import os
32
+
33
+ os.environ["UPSTASH_VECTOR_REST_URL"] = "<UPSTASH_VECTOR_REST_URL>"
34
+ os.environ["UPSTASH_VECTOR_REST_TOKEN"] = "<UPSTASH_VECTOR_REST_TOKEN>"
35
+
36
+ store = UpstashVectorStore(
37
+ embedding=embeddings
38
+ )
39
+ ```
40
+
41
+ An alternative way of creating an `UpstashVectorStore` is to pass `embedding=True`. This is a unique
42
+ feature of the `UpstashVectorStore` thanks to the ability of the Upstash Vector indexes
43
+ to have an associated embedding model. In this configuration, documents we want to insert or
44
+ queries we want to search for are simply sent to Upstash Vector as text. In the background,
45
+ Upstash Vector embeds these texts and executes the request with these embeddings. To use this
46
+ feature, [create an Upstash Vector index by selecting a model](https://upstash.com/docs/vector/features/embeddingmodels#using-a-model)
47
+ and simply pass `embedding=True`:
48
+
49
+ ```python
50
+ from langchain_community.vectorstores.upstash import UpstashVectorStore
51
+ import os
52
+
53
+ os.environ["UPSTASH_VECTOR_REST_URL"] = "<UPSTASH_VECTOR_REST_URL>"
54
+ os.environ["UPSTASH_VECTOR_REST_TOKEN"] = "<UPSTASH_VECTOR_REST_TOKEN>"
55
+
56
+ store = UpstashVectorStore(
57
+ embedding=True
58
+ )
59
+ ```
60
+
61
+ See [Upstash Vector documentation](https://upstash.com/docs/vector/features/embeddingmodels)
62
+ for more detail on embedding models.
63
+
64
+ ## Namespaces
65
+ You can use namespaces to partition your data in the index. Namespaces are useful when you want to query over huge amount of data, and you want to partition the data to make the queries faster. When you use namespaces, there won't be post-filtering on the results which will make the query results more precise.
66
+
67
+ ```python
68
+ from langchain_community.vectorstores.upstash import UpstashVectorStore
69
+ import os
70
+
71
+ os.environ["UPSTASH_VECTOR_REST_URL"] = "<UPSTASH_VECTOR_REST_URL>"
72
+ os.environ["UPSTASH_VECTOR_REST_TOKEN"] = "<UPSTASH_VECTOR_REST_TOKEN>"
73
+
74
+ store = UpstashVectorStore(
75
+ embedding=embeddings,
76
+ namespace="my_namespace"
77
+ )
78
+ ```
79
+
80
+ ### Inserting Vectors
81
+
82
+ ```python
83
+ from langchain.text_splitter import CharacterTextSplitter
84
+ from langchain_community.document_loaders import TextLoader
85
+ from langchain_openai import OpenAIEmbeddings
86
+
87
+ loader = TextLoader("../../modules/state_of_the_union.txt")
88
+ documents = loader.load()
89
+ text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
90
+ docs = text_splitter.split_documents(documents)
91
+
92
+ # Create a new embeddings object
93
+ embeddings = OpenAIEmbeddings()
94
+
95
+ # Create a new UpstashVectorStore object
96
+ store = UpstashVectorStore(
97
+ embedding=embeddings
98
+ )
99
+
100
+ # Insert the document embeddings into the store
101
+ store.add_documents(docs)
102
+ ```
103
+
104
+ When inserting documents, first they are embedded using the `Embeddings` object.
105
+
106
+ Most embedding models can embed multiple documents at once, so the documents are batched and embedded in parallel.
107
+ The size of the batch can be controlled using the `embedding_chunk_size` parameter.
108
+
109
+ The embedded vectors are then stored in the Upstash Vector database. When they are sent, multiple vectors are batched together to reduce the number of HTTP requests.
110
+ The size of the batch can be controlled using the `batch_size` parameter. Upstash Vector has a limit of 1000 vectors per batch in the free tier.
111
+
112
+ ```python
113
+ store.add_documents(
114
+ documents,
115
+ batch_size=100,
116
+ embedding_chunk_size=200
117
+ )
118
+ ```
119
+
120
+ ### Querying Vectors
121
+
122
+ Vectors can be queried using a text query or another vector.
123
+
124
+ The returned value is a list of Document objects.
125
+
126
+ ```python
127
+ result = store.similarity_search(
128
+ "The United States of America",
129
+ k=5
130
+ )
131
+ ```
132
+
133
+ Or using a vector:
134
+
135
+ ```python
136
+ vector = embeddings.embed_query("Hello world")
137
+
138
+ result = store.similarity_search_by_vector(
139
+ vector,
140
+ k=5
141
+ )
142
+ ```
143
+
144
+ When searching, you can also utilize the `filter` parameter which will allow you to filter by metadata:
145
+
146
+ ```python
147
+ result = store.similarity_search(
148
+ "The United States of America",
149
+ k=5,
150
+ filter="type = 'country'"
151
+ )
152
+ ```
153
+
154
+ See [Upstash Vector documentation](https://upstash.com/docs/vector/features/filtering)
155
+ for more details on metadata filtering.
156
+
157
+ ### Deleting Vectors
158
+
159
+ Vectors can be deleted by their IDs.
160
+
161
+ ```python
162
+ store.delete(["id1", "id2"])
163
+ ```
164
+
165
+ ### Getting information about the store
166
+
167
+ You can get information about your database, such as the distance metric and dimension, using the `info` function.
168
+
169
+ When an insert happens, indexing takes place in the database. While this is happening, new vectors cannot be queried. `pendingVectorCount` represents the number of vectors that are currently being indexed.
170
+
171
+ ```python
172
+ info = store.info()
173
+ print(info)
174
+
175
+ # Output:
176
+ # {'vectorCount': 44, 'pendingVectorCount': 0, 'indexSize': 2642412, 'dimension': 1536, 'similarityFunction': 'COSINE'}
177
+ ```
178
+
179
+ # Upstash Redis
180
+
181
+ This page covers how to use [Upstash Redis](https://upstash.com/redis) with LangChain.
182
+
183
+ ## Installation and Setup
184
+ - Upstash Redis Python SDK can be installed with `pip install upstash-redis`
185
+ - A globally distributed, low-latency and highly available database can be created at the [Upstash Console](https://console.upstash.com)
186
+
187
+
188
+ ## Integrations
189
+ All Upstash-LangChain integrations are based on the `upstash-redis` Python SDK, which LangChain wraps.
190
+ The SDK connects to your Upstash Redis database using the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` parameters from the console.
191
+
192
+
193
+ ### Cache
194
+
195
+ [Upstash Redis](https://upstash.com/redis) can be used as a cache for LLM prompts and responses.
196
+
197
+ To import this cache:
198
+ ```python
199
+ from langchain.cache import UpstashRedisCache
200
+ ```
201
+
202
+ To use with your LLMs:
203
+ ```python
204
+ import langchain
205
+ from upstash_redis import Redis
206
+
207
+ URL = "<UPSTASH_REDIS_REST_URL>"
208
+ TOKEN = "<UPSTASH_REDIS_REST_TOKEN>"
209
+
210
+ langchain.llm_cache = UpstashRedisCache(redis_=Redis(url=URL, token=TOKEN))
211
+ ```
212
+
213
+ ### Memory
214
+
215
+ See a [usage example](/docs/integrations/memory/upstash_redis_chat_message_history).
216
+
217
+ ```python
218
+ from langchain_community.chat_message_histories import (
219
+ UpstashRedisChatMessageHistory,
220
+ )
221
+ ```
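+
+ A small sketch of storing chat history (assuming the constructor arguments below; use the REST URL and token from the Upstash console):
+
+ ```python
+ from langchain_community.chat_message_histories import UpstashRedisChatMessageHistory
+
+ history = UpstashRedisChatMessageHistory(
+     url="<UPSTASH_REDIS_REST_URL>",
+     token="<UPSTASH_REDIS_REST_TOKEN>",
+     session_id="my-session",
+ )
+ history.add_user_message("hello")
+ history.add_ai_message("hi there!")
+ print(history.messages)
+ ```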
langchain_md_files/integrations/providers/usearch.mdx ADDED
@@ -0,0 +1,25 @@
1
+ # USearch
2
+ >[USearch](https://unum-cloud.github.io/usearch/) is a Smaller & Faster Single-File Vector Search Engine.
3
+
4
+ >`USearch's` base functionality is identical to `FAISS`, and the interface should look
5
+ > familiar if you have ever investigated Approximate Nearest Neighbors search.
6
+ > `USearch` and `FAISS` both employ `HNSW` algorithm, but they differ significantly
7
+ > in their design principles. `USearch` is compact and broadly compatible with FAISS without
8
+ > sacrificing performance, with a primary focus on user-defined metrics and fewer dependencies.
9
+ >
10
+ ## Installation and Setup
11
+
12
+ We need to install the `usearch` Python package.
13
+
14
+ ```bash
15
+ pip install usearch
16
+ ```
17
+
18
+ ## Vector Store
19
+
20
+ See a [usage example](/docs/integrations/vectorstores/usearch).
21
+
22
+ ```python
23
+ from langchain_community.vectorstores import USearch
24
+ ```
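+
+ As a quick sketch (assuming an embeddings object such as `OpenAIEmbeddings`), building an index from a few texts looks like:
+
+ ```python
+ from langchain_community.vectorstores import USearch
+ from langchain_openai import OpenAIEmbeddings
+
+ db = USearch.from_texts(["hello world", "hello USearch"], OpenAIEmbeddings())
+ docs = db.similarity_search("greeting", k=1)
+ ```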
25
+
langchain_md_files/integrations/providers/vdms.mdx ADDED
@@ -0,0 +1,62 @@
1
+ # VDMS
2
+
3
+ > [VDMS](https://github.com/IntelLabs/vdms/blob/master/README.md) is a storage solution for efficient access
4
+ > of big "visual" data that aims to achieve cloud scale by searching for relevant visual data via visual metadata
5
+ > stored as a graph and enabling machine friendly enhancements to visual data for faster access.
6
+
7
+ ## Installation and Setup
8
+
9
+ ### Install Client
10
+
11
+ ```bash
12
+ pip install vdms
13
+ ```
14
+
15
+ ### Install Database
16
+
17
+ There are two ways to get started with VDMS:
18
+
19
+ #### Install VDMS on your local machine via docker
20
+ ```bash
21
+ docker run -d -p 55555:55555 intellabs/vdms:latest
22
+ ```
23
+
24
+ #### Install VDMS directly on your local machine
25
+ Please see [installation instructions](https://github.com/IntelLabs/vdms/blob/master/INSTALL.md).
26
+
27
+
28
+
29
+ ## VectorStore
30
+
31
+ The vector store is a simple wrapper around VDMS. It provides a simple interface to store and retrieve data.
32
+
33
+ ```python
34
+ from langchain_community.document_loaders import TextLoader
35
+ from langchain.text_splitter import CharacterTextSplitter
36
+
37
+ loader = TextLoader("./state_of_the_union.txt")
38
+ documents = loader.load()
39
+ text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)
40
+ docs = text_splitter.split_documents(documents)
41
+
42
+ from langchain_community.vectorstores import VDMS
43
+ from langchain_community.vectorstores.vdms import VDMS_Client
44
+ from langchain_huggingface import HuggingFaceEmbeddings
45
+
46
+ client = VDMS_Client("localhost", 55555)
47
+ vectorstore = VDMS.from_documents(
48
+ docs,
49
+ client=client,
50
+ collection_name="langchain-demo",
51
+ embedding_function=HuggingFaceEmbeddings(),
52
+ engine="FaissFlat"
53
+ distance_strategy="L2",
54
+ )
55
+
56
+ query = "What did the president say about Ketanji Brown Jackson"
57
+ results = vectorstore.similarity_search(query)
58
+ ```
59
+
60
+ For a more detailed walkthrough of the VDMS wrapper, see [this notebook](/docs/integrations/vectorstores/vdms)
61
+
62
+
langchain_md_files/integrations/providers/vectara/index.mdx ADDED
@@ -0,0 +1,182 @@
1
+ # Vectara
2
+
3
+ >[Vectara](https://vectara.com/) provides a Trusted Generative AI platform, allowing organizations to rapidly create a ChatGPT-like experience (an AI assistant)
4
+ > which is grounded in the data, documents, and knowledge that they have (technically, it is Retrieval-Augmented-Generation-as-a-service).
5
+
6
+ **Vectara Overview:**
7
+ `Vectara` is RAG-as-a-service, providing all the components of RAG behind an easy-to-use API, including:
8
+ 1. A way to extract text from files (PDF, PPT, DOCX, etc)
9
+ 2. ML-based chunking that provides state of the art performance.
10
+ 3. The [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model.
11
+ 4. Its own internal vector database where text chunks and embedding vectors are stored.
12
+ 5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments
13
+ (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and
14
+ [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))
15
+ 6. An LLM for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations.
16
+
17
+ For more information:
18
+ - [Documentation](https://docs.vectara.com/docs/)
19
+ - [API Playground](https://docs.vectara.com/docs/rest-api/)
20
+ - [Quickstart](https://docs.vectara.com/docs/quickstart)
21
+
22
+ ## Installation and Setup
23
+
24
+ To use `Vectara` with LangChain no special installation steps are required.
25
+ To get started, [sign up](https://vectara.com/integrations/langchain) for a free Vectara account (if you don't already have one),
26
+ and follow the [quickstart](https://docs.vectara.com/docs/quickstart) guide to create a corpus and an API key.
27
+ Once you have these, you can provide them as arguments to the Vectara `vectorstore`, or you can set them as environment variables.
28
+
29
+ - export `VECTARA_CUSTOMER_ID`="your_customer_id"
30
+ - export `VECTARA_CORPUS_ID`="your_corpus_id"
31
+ - export `VECTARA_API_KEY`="your-vectara-api-key"
32
+
33
+ ## Vectara as a Vector Store
34
+
35
+ There exists a wrapper around the Vectara platform, allowing you to use it as a `vectorstore` in LangChain:
36
+
37
+ To import this vectorstore:
38
+ ```python
39
+ from langchain_community.vectorstores import Vectara
40
+ ```
41
+
42
+ To create an instance of the Vectara vectorstore:
43
+ ```python
44
+ vectara = Vectara(
45
+ vectara_customer_id=customer_id,
46
+ vectara_corpus_id=corpus_id,
47
+ vectara_api_key=api_key
48
+ )
49
+ ```
50
+ The `customer_id`, `corpus_id` and `api_key` are optional, and if they are not supplied will be read from
51
+ the environment variables `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`, respectively.
52
+
53
+ ### Adding Texts or Files
54
+
55
+ After you have the vectorstore, you can `add_texts` or `add_documents` as per the standard `VectorStore` interface, for example:
56
+
57
+ ```python
58
+ vectara.add_texts(["to be or not to be", "that is the question"])
59
+ ```
60
+
61
+ Since Vectara supports file-upload in the platform, we also added the ability to upload files (PDF, TXT, HTML, PPT, DOC, etc) directly.
62
+ When using this method, each file is uploaded directly to the Vectara backend, processed and chunked optimally there, so you don't have to use the LangChain document loader or chunking mechanism.
63
+
64
+ As an example:
65
+
66
+ ```python
67
+ vectara.add_files(["path/to/file1.pdf", "path/to/file2.pdf",...])
68
+ ```
69
+
70
+ Of course you do not have to add any data, and instead just connect to an existing Vectara corpus where data may already be indexed.
71
+
72
+ ### Querying the VectorStore
73
+
74
+ To query the Vectara vectorstore, you can use the `similarity_search` method (or `similarity_search_with_score`), which takes a query string and returns a list of results:
75
+ ```python
76
+ results = vectara.similarity_search_with_score("what is LangChain?")
77
+ ```
78
+ The results are returned as a list of relevant documents, and a relevance score of each document.
79
+
80
+ In this case, we used the default retrieval parameters, but you can also specify the following additional arguments in `similarity_search` or `similarity_search_with_score`:
81
+ - `k`: number of results to return (defaults to 5)
82
+ - `lambda_val`: the [lexical matching](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) factor for hybrid search (defaults to 0.025)
83
+ - `filter`: a [filter](https://docs.vectara.com/docs/common-use-cases/filtering-by-metadata/filter-overview) to apply to the results (default None)
84
+ - `n_sentence_context`: number of sentences to include before/after the actual matching segment when returning results. This defaults to 2.
85
+ - `rerank_config`: can be used to specify a reranker for the results
86
+ - `reranker`: mmr, rerank_multilingual_v1 or none. Note that "rerank_multilingual_v1" is a Scale only feature
87
+ - `rerank_k`: number of results to use for reranking
88
+ - `mmr_diversity_bias`: 0 = no diversity, 1 = full diversity. This is the lambda parameter in the MMR formula and is in the range 0...1
89
+
90
+ To get results without the relevance score, you can simply use the 'similarity_search' method:
91
+ ```python
92
+ results = vectara.similarity_search("what is LangChain?")
93
+ ```
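+
+ For example, a sketch combining some of the optional arguments listed above (the filter expression and values are illustrative only):
+
+ ```python
+ results = vectara.similarity_search(
+     "what is LangChain?",
+     k=3,                          # number of results
+     n_sentence_context=2,         # sentences of context around each match
+     filter="doc.type = 'blog'",   # hypothetical metadata filter
+ )
+ ```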
94
+
95
+ ## Vectara for Retrieval Augmented Generation (RAG)
96
+
97
+ Vectara provides a full RAG pipeline, including generative summarization. To use it as a complete RAG solution, you can use the `as_rag` method.
98
+ There are a few additional parameters that can be specified in the `VectaraQueryConfig` object to control retrieval and summarization:
99
+ * k: number of results to return
100
+ * lambda_val: the lexical matching factor for hybrid search
101
+ * summary_config (optional): can be used to request an LLM summary in RAG
102
+ - is_enabled: True or False
103
+ - max_results: number of results to use for summary generation
104
+ - response_lang: language of the response summary, in ISO 639-2 format (e.g. 'en', 'fr', 'de', etc)
105
+ * rerank_config (optional): can be used to specify Vectara Reranker of the results
106
+ - reranker: mmr, rerank_multilingual_v1 or none
107
+ - rerank_k: number of results to use for reranking
108
+ - mmr_diversity_bias: 0 = no diversity, 1 = full diversity.
109
+ This is the lambda parameter in the MMR formula and is in the range 0...1
110
+
111
+ For example:
112
+
113
+ ```python
114
+ from langchain_community.vectorstores.vectara import RerankConfig, SummaryConfig, VectaraQueryConfig
+
+ summary_config = SummaryConfig(is_enabled=True, max_results=7, response_lang='eng')
115
+ rerank_config = RerankConfig(reranker="mmr", rerank_k=50, mmr_diversity_bias=0.2)
116
+ config = VectaraQueryConfig(k=10, lambda_val=0.005, rerank_config=rerank_config, summary_config=summary_config)
117
+ ```
118
+ Then you can use the `as_rag` method to create a RAG pipeline:
119
+
120
+ ```python
121
+ query_str = "what did Biden say?"
122
+
123
+ rag = vectara.as_rag(config)
124
+ rag.invoke(query_str)['answer']
125
+ ```
126
+
127
+ The `as_rag` method returns a `VectaraRAG` object, which behaves just like any LangChain Runnable, including the `invoke` or `stream` methods.
128
+
129
+ ## Vectara Chat
130
+
131
+ The RAG functionality can be used to create a chatbot. For example, you can create a simple chatbot that responds to user input:
132
+
133
+ ```python
134
+ summary_config = SummaryConfig(is_enabled=True, max_results=7, response_lang='eng')
135
+ rerank_config = RerankConfig(reranker="mmr", rerank_k=50, mmr_diversity_bias=0.2)
136
+ config = VectaraQueryConfig(k=10, lambda_val=0.005, rerank_config=rerank_config, summary_config=summary_config)
137
+
138
+ query_str = "what did Biden say?"
139
+ bot = vectara.as_chat(config)
140
+ bot.invoke(query_str)['answer']
141
+ ```
142
+
143
+ The main difference is the following: with `as_chat` Vectara internally tracks the chat history and conditions each response on the full chat history.
144
+ There is no need to keep that history locally to LangChain, as Vectara will manage it internally.
145
+
146
+ ## Vectara as a LangChain retriever only
147
+
148
+ If you want to use Vectara as a retriever only, you can use the `as_retriever` method, which returns a `VectaraRetriever` object.
149
+ ```python
150
+ retriever = vectara.as_retriever(config=config)
151
+ retriever.invoke(query_str)
152
+ ```
153
+
154
+ Like with as_rag, you provide a `VectaraQueryConfig` object to control the retrieval parameters.
155
+ In most cases you would not enable the summary_config, but it is left as an option for backwards compatibility.
156
+ If no summary is requested, the response will be a list of relevant documents, each with a relevance score.
157
+ If a summary is requested, the response will be a list of relevant documents as before, plus an additional document that includes the generative summary.
158
+
159
+ ## Hallucination Detection score
160
+
161
+ Vectara created [HHEM](https://huggingface.co/vectara/hallucination_evaluation_model) - an open source model that can be used to evaluate RAG responses for factual consistency.
162
+ As part of the Vectara RAG, the "Factual Consistency Score" (or FCS), which is an improved version of the open source HHEM, is made available via the API.
+ This is automatically included in the output of the RAG pipeline.
164
+
165
+ ```python
166
+ summary_config = SummaryConfig(is_enabled=True, max_results=7, response_lang='eng')
167
+ rerank_config = RerankConfig(reranker="mmr", rerank_k=50, mmr_diversity_bias=0.2)
168
+ config = VectaraQueryConfig(k=10, lambda_val=0.005, rerank_config=rerank_config, summary_config=summary_config)
169
+
170
+ rag = vectara.as_rag(config)
171
+ resp = rag.invoke(query_str)
172
+ print(resp['answer'])
173
+ print(f"Vectara FCS = {resp['fcs']}")
174
+ ```
175
+
176
+ ## Example Notebooks
177
+
178
+ For more detailed examples of using Vectara with LangChain, see the following example notebooks:
179
+ * [this notebook](/docs/integrations/vectorstores/vectara) shows how to use Vectara: with full RAG or just as a retriever.
180
+ * [this notebook](/docs/integrations/retrievers/self_query/vectara_self_query) shows the self-query capability with Vectara.
181
+ * [this notebook](/docs/integrations/providers/vectara/vectara_chat) shows how to build a chatbot with Langchain and Vectara
182
+
langchain_md_files/integrations/providers/vespa.mdx ADDED
@@ -0,0 +1,21 @@
1
+ # Vespa
2
+
3
+ >[Vespa](https://vespa.ai/) is a fully featured search engine and vector database.
4
+ > It supports vector search (ANN), lexical search, and search in structured data, all in the same query.
5
+
6
+ ## Installation and Setup
7
+
8
+
9
+ ```bash
10
+ pip install pyvespa
11
+ ```
12
+
13
+
14
+
15
+ ## Retriever
16
+
17
+ See a [usage example](/docs/integrations/retrievers/vespa).
18
+
19
+ ```python
20
+ from langchain.retrievers import VespaRetriever
21
+ ```
langchain_md_files/integrations/providers/vlite.mdx ADDED
@@ -0,0 +1,31 @@
1
+ # vlite
2
+
3
+ This page covers how to use [vlite](https://github.com/sdan/vlite) within LangChain. vlite is a simple and fast vector database for storing and retrieving embeddings.
4
+
5
+ ## Installation and Setup
6
+
7
+ To install vlite, run the following command:
8
+
9
+ ```bash
10
+ pip install vlite
11
+ ```
12
+
13
+ For PDF OCR support, install the `vlite[ocr]` extra:
14
+
15
+ ```bash
16
+ pip install vlite[ocr]
17
+ ```
18
+
19
+ ## VectorStore
20
+
21
+ vlite provides a wrapper around its vector database, allowing you to use it as a vectorstore for semantic search and example selection.
22
+
23
+ To import the vlite vectorstore:
24
+
25
+ ```python
26
+ from langchain_community.vectorstores import vlite
27
+ ```
28
+
29
+ ### Usage
30
+
31
+ For a more detailed walkthrough of the vlite wrapper, see [this notebook](/docs/integrations/vectorstores/vlite).
langchain_md_files/integrations/providers/voyageai.mdx ADDED
@@ -0,0 +1,32 @@
1
+ # VoyageAI
2
+
3
+ All functionality related to VoyageAI
4
+
5
+ >[Voyage AI](https://www.voyageai.com/) builds embedding models, customized for your domain and company, for better retrieval quality.
6
+
7
+ ## Installation and Setup
8
+
9
+ Install the integration package with
10
+ ```bash
11
+ pip install langchain-voyageai
12
+ ```
13
+
14
+ Get a VoyageAI API key and set it as an environment variable (`VOYAGE_API_KEY`)
15
+
16
+
17
+ ## Text Embedding Model
18
+
19
+ See a [usage example](/docs/integrations/text_embedding/voyageai)
20
+
21
+ ```python
22
+ from langchain_voyageai import VoyageAIEmbeddings
23
+ ```
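+
+ A minimal sketch (the model name below is only an example; pick one from Voyage AI's model list and make sure `VOYAGE_API_KEY` is set):
+
+ ```python
+ from langchain_voyageai import VoyageAIEmbeddings
+
+ embeddings = VoyageAIEmbeddings(model="voyage-law-2")  # example model name
+ vector = embeddings.embed_query("What is retrieval augmented generation?")
+ ```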
24
+
25
+
26
+ ## Reranking
27
+
28
+ See a [usage example](/docs/integrations/document_transformers/voyageai-reranker)
29
+
30
+ ```python
31
+ from langchain_voyageai import VoyageAIRerank
32
+ ```
langchain_md_files/integrations/providers/weather.mdx ADDED
@@ -0,0 +1,21 @@
1
+ # Weather
2
+
3
+ >[OpenWeatherMap](https://openweathermap.org/) is an open-source weather service provider.
4
+
5
+
6
+
7
+ ## Installation and Setup
8
+
9
+ ```bash
10
+ pip install pyowm
11
+ ```
12
+
13
+ We must set up the `OpenWeatherMap API token`.
14
+
15
+ ## Document Loader
16
+
17
+ See a [usage example](/docs/integrations/document_loaders/weather).
18
+
19
+ ```python
20
+ from langchain_community.document_loaders import WeatherDataLoader
21
+ ```
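+
+ A rough sketch (assuming the `from_params` constructor with the arguments below and an OpenWeatherMap API key):
+
+ ```python
+ from langchain_community.document_loaders import WeatherDataLoader
+
+ loader = WeatherDataLoader.from_params(
+     places=["Berlin", "Tokyo"],
+     openweathermap_api_key="<your-openweathermap-api-key>",
+ )
+ documents = loader.load()
+ ```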
langchain_md_files/integrations/providers/weaviate.mdx ADDED
@@ -0,0 +1,38 @@
1
+ # Weaviate
2
+
3
+ >[Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store data objects and vector embeddings from
4
+ >your favorite ML models, and scale seamlessly into billions of data objects.
5
+
6
+
7
+ What is `Weaviate`?
8
+ - Weaviate is an open-source database of the vector search engine type.
9
+ - Weaviate allows you to store JSON documents in a class property-like fashion while attaching machine learning vectors to these documents to represent them in vector space.
10
+ - Weaviate can be used stand-alone (aka bring your vectors) or with a variety of modules that can do the vectorization for you and extend the core capabilities.
11
+ - Weaviate has a GraphQL-API to access your data easily.
12
+ - We aim to bring your vector search set up to production to query in mere milliseconds (check our [open-source benchmarks](https://weaviate.io/developers/weaviate/current/benchmarks/) to see if Weaviate fits your use case).
13
+ - Get to know Weaviate in the [basics getting started guide](https://weaviate.io/developers/weaviate/current/core-knowledge/basics.html) in under five minutes.
14
+
15
+ **Weaviate in detail:**
16
+
17
+ `Weaviate` is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages.
18
+
19
+ ## Installation and Setup
20
+
21
+ Install the Python SDK:
22
+
23
+ ```bash
24
+ pip install langchain-weaviate
25
+ ```
26
+
27
+
28
+ ## Vector Store
29
+
30
+ There exists a wrapper around `Weaviate` indexes, allowing you to use it as a vectorstore,
31
+ whether for semantic search or example selection.
32
+
33
+ To import this vectorstore:
34
+ ```python
35
+ from langchain_weaviate import WeaviateVectorStore
36
+ ```
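+
+ A short sketch of indexing a document (assumes a Weaviate instance running locally and reachable via `weaviate.connect_to_local()`, plus an OpenAI embeddings object):
+
+ ```python
+ import weaviate
+ from langchain_core.documents import Document
+ from langchain_openai import OpenAIEmbeddings
+ from langchain_weaviate import WeaviateVectorStore
+
+ client = weaviate.connect_to_local()  # local Weaviate instance
+ docs = [Document(page_content="Weaviate is an open-source vector database.")]
+ db = WeaviateVectorStore.from_documents(docs, OpenAIEmbeddings(), client=client)
+ results = db.similarity_search("What is Weaviate?", k=1)
+ ```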
37
+
38
+ For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](/docs/integrations/vectorstores/weaviate)
langchain_md_files/integrations/providers/whatsapp.mdx ADDED
@@ -0,0 +1,18 @@
1
+ # WhatsApp
2
+
3
+ >[WhatsApp](https://www.whatsapp.com/) (also called `WhatsApp Messenger`) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.
4
+
5
+
6
+ ## Installation and Setup
7
+
8
+ There isn't any special setup for it.
9
+
10
+
11
+
12
+ ## Document Loader
13
+
14
+ See a [usage example](/docs/integrations/document_loaders/whatsapp_chat).
15
+
16
+ ```python
17
+ from langchain_community.document_loaders import WhatsAppChatLoader
18
+ ```
langchain_md_files/integrations/providers/wikipedia.mdx ADDED
@@ -0,0 +1,28 @@
1
+ # Wikipedia
2
+
3
+ >[Wikipedia](https://wikipedia.org/) is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. `Wikipedia` is the largest and most-read reference work in history.
4
+
5
+
6
+ ## Installation and Setup
7
+
8
+ ```bash
9
+ pip install wikipedia
10
+ ```
11
+
12
+
13
+
14
+ ## Document Loader
15
+
16
+ See a [usage example](/docs/integrations/document_loaders/wikipedia).
17
+
18
+ ```python
19
+ from langchain_community.document_loaders import WikipediaLoader
20
+ ```
21
+
22
+ ## Retriever
23
+
24
+ See a [usage example](/docs/integrations/retrievers/wikipedia).
25
+
26
+ ```python
27
+ from langchain.retrievers import WikipediaRetriever
28
+ ```
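+
+ For example, a short sketch of retrieving a couple of articles (assuming the `load_max_docs` argument):
+
+ ```python
+ from langchain.retrievers import WikipediaRetriever
+
+ retriever = WikipediaRetriever(load_max_docs=2)
+ docs = retriever.invoke("LangChain")
+ print(docs[0].metadata["title"])
+ ```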
langchain_md_files/integrations/providers/wolfram_alpha.mdx ADDED
@@ -0,0 +1,39 @@
1
+ # Wolfram Alpha
2
+
3
+ >[WolframAlpha](https://en.wikipedia.org/wiki/WolframAlpha) is an answer engine developed by `Wolfram Research`.
4
+ > It answers factual queries by computing answers from externally sourced data.
5
+
6
+ This page covers how to use the `Wolfram Alpha API` within LangChain.
7
+
8
+ ## Installation and Setup
9
+ - Install requirements with
10
+ ```bash
11
+ pip install wolframalpha
12
+ ```
13
+ - Go to Wolfram Alpha and sign up for a developer account [here](https://developer.wolframalpha.com/)
14
+ - Create an app and get your `APP ID`
15
+ - Set your APP ID as an environment variable `WOLFRAM_ALPHA_APPID`
16
+
17
+
18
+ ## Wrappers
19
+
20
+ ### Utility
21
+
22
+ There exists a WolframAlphaAPIWrapper utility which wraps this API. To import this utility:
23
+
24
+ ```python
25
+ from langchain_community.utilities.wolfram_alpha import WolframAlphaAPIWrapper
26
+ ```
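+
+ A minimal sketch of calling the wrapper (assumes `WOLFRAM_ALPHA_APPID` is set in your environment):
+
+ ```python
+ from langchain_community.utilities.wolfram_alpha import WolframAlphaAPIWrapper
+
+ wolfram = WolframAlphaAPIWrapper()
+ print(wolfram.run("What is 2x + 5 = -3x + 7?"))
+ ```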
27
+
28
+ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/wolfram_alpha).
29
+
30
+ ### Tool
31
+
32
+ You can also easily load this wrapper as a Tool (to use with an Agent).
33
+ You can do this with:
34
+ ```python
35
+ from langchain.agents import load_tools
36
+ tools = load_tools(["wolfram-alpha"])
37
+ ```
38
+
39
+ For more information on tools, see [this page](/docs/how_to/tools_builtin).
langchain_md_files/integrations/providers/writer.mdx ADDED
@@ -0,0 +1,16 @@
1
+ # Writer
2
+
3
+ This page covers how to use the Writer ecosystem within LangChain.
4
+ It is broken into two parts: installation and setup, and then references to specific Writer wrappers.
5
+
6
+ ## Installation and Setup
7
+ - Get a Writer API key and set it as an environment variable (`WRITER_API_KEY`)
8
+
9
+ ## Wrappers
10
+
11
+ ### LLM
12
+
13
+ There exists a Writer LLM wrapper, which you can access with
14
+ ```python
15
+ from langchain_community.llms import Writer
16
+ ```
langchain_md_files/integrations/providers/xata.mdx ADDED
@@ -0,0 +1,36 @@
1
+ # Xata
2
+
3
+ > [Xata](https://xata.io) is a serverless data platform, based on `PostgreSQL`.
4
+ > It provides a Python SDK for interacting with your database, and a UI
5
+ > for managing your data.
6
+ > `Xata` has a native vector type, which can be added to any table, and
7
+ > supports similarity search. LangChain inserts vectors directly to `Xata`,
8
+ > and queries it for the nearest neighbors of a given vector, so that you can
9
+ > use all the LangChain Embeddings integrations with `Xata`.
10
+
11
+
12
+ ## Installation and Setup
13
+
14
+
15
+ We need to install the `xata` Python package.
16
+
17
+ ```bash
18
+ pip install xata==1.0.0a7
19
+ ```
20
+
21
+ ## Vector Store
22
+
23
+ See a [usage example](/docs/integrations/vectorstores/xata).
24
+
25
+ ```python
26
+ from langchain_community.vectorstores import XataVectorStore
27
+ ```
28
+
29
+ ## Memory
30
+
31
+ See a [usage example](/docs/integrations/memory/xata_chat_message_history).
32
+
33
+ ```python
34
+ from langchain_community.chat_message_histories import XataChatMessageHistory
35
+ ```
36
+
langchain_md_files/integrations/providers/xinference.mdx ADDED
@@ -0,0 +1,102 @@
1
+ # Xorbits Inference (Xinference)
2
+
3
+ This page demonstrates how to use [Xinference](https://github.com/xorbitsai/inference)
4
+ with LangChain.
5
+
6
+ `Xinference` is a powerful and versatile library designed to serve LLMs,
7
+ speech recognition models, and multimodal models, even on your laptop.
8
+ With Xorbits Inference, you can effortlessly deploy and serve your or
9
+ state-of-the-art built-in models using just a single command.
10
+
11
+ ## Installation and Setup
12
+
13
+ Xinference can be installed via pip from PyPI:
14
+
15
+ ```bash
16
+ pip install "xinference[all]"
17
+ ```
18
+
19
+ ## LLM
20
+
21
+ Xinference supports various models compatible with GGML, including chatglm, baichuan, whisper,
22
+ vicuna, and orca. To view the builtin models, run the command:
23
+
24
+ ```bash
25
+ xinference list --all
26
+ ```
27
+
28
+
29
+ ### Wrapper for Xinference
30
+
31
+ You can start a local instance of Xinference by running:
32
+
33
+ ```bash
34
+ xinference
35
+ ```
36
+
37
+ You can also deploy Xinference in a distributed cluster. To do so, first start an Xinference supervisor
38
+ on the server you want to run it:
39
+
40
+ ```bash
41
+ xinference-supervisor -H "${supervisor_host}"
42
+ ```
43
+
44
+
45
+ Then, start the Xinference workers on each of the other servers where you want to run them:
46
+
47
+ ```bash
48
+ xinference-worker -e "http://${supervisor_host}:9997"
49
+ ```
50
+
57
+ Once Xinference is running, an endpoint will be accessible for model management via CLI or
58
+ Xinference client.
59
+
60
+ For local deployment, the endpoint will be http://localhost:9997.
61
+
62
+
63
+ For cluster deployment, the endpoint will be http://${supervisor_host}:9997.
64
+
65
+
66
+ Then, you need to launch a model. You can specify the model names and other attributes
67
+ including model_size_in_billions and quantization. You can use the command line interface (CLI) to
68
+ do it. For example,
69
+
70
+ ```bash
71
+ xinference launch -n orca -s 3 -q q4_0
72
+ ```
73
+
74
+ A model uid will be returned.
75
+
76
+ Example usage:
77
+
78
+ ```python
79
+ from langchain_community.llms import Xinference
80
+
81
+ llm = Xinference(
82
+ server_url="http://0.0.0.0:9997",
83
+ model_uid = {model_uid} # replace model_uid with the model UID returned from launching the model
84
+ )
85
+
86
+ llm(
87
+ prompt="Q: where can we visit in the capital of France? A:",
88
+ generate_config={"max_tokens": 1024, "stream": True},
89
+ )
90
+
91
+ ```
92
+
93
+ ### Usage
94
+
95
+ For more information and detailed examples, refer to the
96
+ [example for xinference LLMs](/docs/integrations/llms/xinference)
97
+
98
+ ### Embeddings
99
+
100
+ Xinference also supports embedding queries and documents. See
101
+ [example for xinference embeddings](/docs/integrations/text_embedding/xinference)
102
+ for a more detailed demo.
langchain_md_files/integrations/providers/yandex.mdx ADDED
@@ -0,0 +1,33 @@
1
+ # Yandex
2
+
3
+ All functionality related to Yandex Cloud
4
+
5
+ >[Yandex Cloud](https://cloud.yandex.com/en/) is a public cloud platform.
6
+
7
+ ## Installation and Setup
8
+
9
+ Yandex Cloud SDK can be installed via pip from PyPI:
10
+
11
+ ```bash
12
+ pip install yandexcloud
13
+ ```
14
+
15
+ ## LLMs
16
+
17
+ ### YandexGPT
18
+
19
+ See a [usage example](/docs/integrations/llms/yandex).
20
+
21
+ ```python
22
+ from langchain_community.llms import YandexGPT
23
+ ```
24
+
25
+ ## Chat models
26
+
27
+ ### YandexGPT
28
+
29
+ See a [usage example](/docs/integrations/chat/yandex).
30
+
31
+ ```python
32
+ from langchain_community.chat_models import ChatYandexGPT
33
+ ```
langchain_md_files/integrations/providers/yeagerai.mdx ADDED
@@ -0,0 +1,43 @@
1
+ # Yeager.ai
2
+
3
+ This page covers how to use [Yeager.ai](https://yeager.ai) to generate LangChain tools and agents.
4
+
5
+ ## What is Yeager.ai?
6
+ Yeager.ai is an ecosystem designed to simplify the process of creating AI agents and tools.
7
+
8
+ It features yAgents, a No-code LangChain Agent Builder, which enables users to build, test, and deploy AI solutions with ease. Leveraging the LangChain framework, yAgents allows seamless integration with various language models and resources, making it suitable for developers, researchers, and AI enthusiasts across diverse applications.
9
+
10
+ ## yAgents
11
+ yAgents is a low-code generative agent designed to help you build, prototype, and deploy LangChain tools with ease.
12
+
13
+ ### How to use?
14
+ ```
15
+ pip install yeagerai-agent
16
+ yeagerai-agent
17
+ ```
18
+ Go to http://127.0.0.1:7860
19
+
20
+ This will install the necessary dependencies and set up yAgents on your system. After the first run, yAgents will create a .env file where you can input your OpenAI API key. You can do the same directly from the Gradio interface under the tab "Settings".
21
+
22
+ `OPENAI_API_KEY=<your_openai_api_key_here>`
23
+
24
+ We recommend using GPT-4. However, the tool can also work with GPT-3 if the problem is broken down sufficiently.
25
+
26
+ ### Creating and Executing Tools with yAgents
27
+ yAgents makes it easy to create and execute AI-powered tools. Here's a brief overview of the process:
28
+ 1. Create a tool: To create a tool, provide a natural language prompt to yAgents. The prompt should clearly describe the tool's purpose and functionality. For example:
29
+ `create a tool that returns the n-th prime number`
30
+
31
+ 2. Load the tool into the toolkit: To load a tool into yAgents, simply provide a command to yAgents that says so. For example:
32
+ `load the tool that you just created into your toolkit`
33
+
34
+ 3. Execute the tool: To run a tool or agent, simply provide a command to yAgents that includes the name of the tool and any required parameters. For example:
35
+ `generate the 50th prime number`
36
+
37
+ You can see a video of how it works [here](https://www.youtube.com/watch?v=KA5hCM3RaWE).
38
+
39
+ As you become more familiar with yAgents, you can create more advanced tools and agents to automate your work and enhance your productivity.
40
+
41
+ For more information, see [yAgents' Github](https://github.com/yeagerai/yeagerai-agent) or our [docs](https://yeagerai.gitbook.io/docs/general/welcome-to-yeager.ai)
42
+
43
+
langchain_md_files/integrations/providers/yi.mdx ADDED
@@ -0,0 +1,23 @@
1
+ # 01.AI
2
+
3
+ >[01.AI](https://www.lingyiwanwu.com/en), founded by Dr. Kai-Fu Lee, is a global company at the forefront of AI 2.0. They offer cutting-edge large language models, including the Yi series, which range from 6B to hundreds of billions of parameters. 01.AI also provides multimodal models, an open API platform, and open-source options like Yi-34B/9B/6B and Yi-VL.
4
+
5
+ ## Installation and Setup
6
+
7
+ Register and get an API key from either the China site [here](https://platform.lingyiwanwu.com/apikeys) or the global site [here](https://platform.01.ai/apikeys).
8
+
9
+ ## LLMs
10
+
11
+ See a [usage example](/docs/integrations/llms/yi).
12
+
13
+ ```python
14
+ from langchain_community.llms import YiLLM
15
+ ```
16
+
17
+ ## Chat models
18
+
19
+ See a [usage example](/docs/integrations/chat/yi).
20
+
21
+ ```python
22
+ from langchain_community.chat_models import ChatYi
23
+ ```
langchain_md_files/integrations/providers/youtube.mdx ADDED
@@ -0,0 +1,22 @@
1
+ # YouTube
2
+
3
+ >[YouTube](https://www.youtube.com/) is an online video sharing and social media platform by Google.
4
+ > We download the `YouTube` transcripts and video information.
5
+
6
+ ## Installation and Setup
7
+
8
+ ```bash
9
+ pip install youtube-transcript-api
10
+ pip install pytube
11
+ ```
14
+
15
+ ## Document Loader
16
+
17
+ See a [usage example](/docs/integrations/document_loaders/youtube_transcript).
18
+
19
+ ```python
20
+ from langchain_community.document_loaders import YoutubeLoader
21
+ from langchain_community.document_loaders import GoogleApiYoutubeLoader
22
+ ```
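+
+ For illustration, a minimal sketch of loading a single video transcript (the URL is a placeholder; `add_video_info=True` additionally requires `pytube`):
+
+ ```python
+ from langchain_community.document_loaders import YoutubeLoader
+
+ loader = YoutubeLoader.from_youtube_url(
+     "https://www.youtube.com/watch?v=<video-id>",
+     add_video_info=True,
+ )
+ docs = loader.load()  # one Document containing the transcript plus video metadata
+ print(docs[0].metadata)
+ ```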
langchain_md_files/integrations/providers/zep.mdx ADDED
@@ -0,0 +1,120 @@
1
+ # Zep
2
+ > Recall, understand, and extract data from chat histories. Power personalized AI experiences.
3
+
4
+ >[Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.
5
+ > With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,
6
+ > while also reducing hallucinations, latency, and cost.
7
+
8
+ ## How Zep works
9
+
10
+ Zep persists and recalls chat histories, and automatically generates summaries and other artifacts from these chat histories.
11
+ It also embeds messages and summaries, enabling you to search Zep for relevant context from past conversations.
12
+ Zep does all of this asynchronously, ensuring these operations don't impact your user's chat experience.
13
+ Data is persisted to a database, allowing you to scale out when growth demands.
14
+
15
+ Zep also provides a simple, easy to use abstraction for document vector search called Document Collections.
16
+ This is designed to complement Zep's core memory features, but is not designed to be a general purpose vector database.
17
+
18
+ Zep allows you to be more intentional about constructing your prompt:
19
+ - automatically adding a few recent messages, with the number customized for your app;
20
+ - a summary of recent conversations prior to the messages above;
21
+ - and/or contextually relevant summaries or messages surfaced from the entire chat session.
22
+ - and/or relevant Business data from Zep Document Collections.
23
+
24
+ ## What is Zep Cloud?
25
+ [Zep Cloud](https://www.getzep.com) is a managed service with Zep Open Source at its core.
26
+ In addition to Zep Open Source's memory management features, Zep Cloud offers:
27
+ - **Fact Extraction**: Automatically build fact tables from conversations, without having to define a data schema upfront.
28
+ - **Dialog Classification**: Instantly and accurately classify chat dialog. Understand user intent and emotion, segment users, and more. Route chains based on semantic context, and trigger events.
29
+ - **Structured Data Extraction**: Quickly extract business data from chat conversations using a schema you define. Understand what your Assistant should ask for next in order to complete its task.
30
+
31
+
32
+
33
+ ## Zep Open Source
34
+ Zep offers an open source version with a self-hosted option.
35
+ Please refer to the [Zep Open Source](https://github.com/getzep/zep) repo for more information.
36
+ You can also find Zep Open Source compatible [Retriever](/docs/integrations/retrievers/zep_memorystore), [Vector Store](/docs/integrations/vectorstores/zep) and [Memory](/docs/integrations/memory/zep_memory) examples
37
+
38
+ ## Zep Cloud Installation and Setup
39
+
40
+ [Zep Cloud Docs](https://help.getzep.com)
41
+
42
+ 1. Install the Zep Cloud SDK:
43
+
44
+ ```bash
45
+ pip install zep_cloud
46
+ ```
47
+ or
48
+ ```bash
49
+ poetry add zep_cloud
50
+ ```
51
+
52
+ ## Memory
53
+
54
+ Zep's Memory API persists your users' chat history and metadata to a [Session](https://help.getzep.com/chat-history-memory/sessions), enriches the memory, and
55
+ enables vector similarity search over historical chat messages and dialog summaries.
56
+
57
+ Zep offers several approaches to populating prompts with context from historical conversations.
58
+
59
+ ### Perpetual Memory
60
+ This is the default memory type.
61
+ Salient facts from the dialog are extracted and stored in a Fact Table.
62
+ This is updated in real-time as new messages are added to the Session.
63
+ Every time you call the Memory API to get a Memory, Zep returns the Fact Table, the most recent messages (per your Message Window setting), and a summary of the most recent messages prior to the Message Window.
64
+ The combination of the Fact Table, summary, and the most recent messages in a prompt provides both factual context and nuance to the LLM.
65
+
66
+ ### Summary Retriever Memory
67
+ Returns the most recent messages and a summary of past messages relevant to the current conversation,
68
+ enabling you to provide your Assistant with helpful context from past conversations.
69
+
70
+ ### Message Window Buffer Memory
71
+ Returns the most recent N messages from the current conversation.
72
+
73
+ Additionally, Zep enables vector similarity searches for Messages or Summaries stored within its system.
74
+
75
+ This feature lets you populate prompts with past conversations that are contextually similar to a specific query,
76
+ organizing the results by a similarity Score.
77
+
78
+ `ZepCloudChatMessageHistory` and `ZepCloudMemory` classes can be imported to interact with Zep Cloud APIs.
79
+
80
+ `ZepCloudChatMessageHistory` is compatible with `RunnableWithMessageHistory`.
81
+ ```python
82
+ from langchain_community.chat_message_histories import ZepCloudChatMessageHistory
83
+ ```
84
+
85
+ See a [Perpetual Memory Example here](/docs/integrations/memory/zep_cloud_chat_message_history).
86
+
87
+ You can use `ZepCloudMemory` together with agents that support Memory.
88
+ ```python
89
+ from langchain_community.memory import ZepCloudMemory
90
+ ```
91
+
92
+ See a [Memory RAG Example here](/docs/integrations/memory/zep_memory_cloud).
93
+
94
+ ## Retriever
95
+
96
+ Zep's Memory Retriever is a LangChain Retriever that enables you to retrieve messages from a Zep Session and use them to construct your prompt.
97
+
98
+ The Retriever supports searching over both individual messages and summaries of conversations. The latter is useful for providing rich, but succinct context to the LLM as to relevant past conversations.
99
+
100
+ Zep's Memory Retriever supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://help.getzep.com/working-with-search#how-zeps-mmr-re-ranking-works). MMR search is useful for ensuring that the retrieved messages are diverse and not too similar to each other.
101
+
102
+ See a [usage example](/docs/integrations/retrievers/zep_cloud_memorystore).
103
+
104
+ ```python
105
+ from langchain_community.retrievers import ZepCloudRetriever
106
+ ```
107
+
108
+ ## Vector store
109
+
110
+ Zep's [Document VectorStore API](https://help.getzep.com/document-collections) enables you to store and retrieve documents using vector similarity search. Zep doesn't require you to understand
111
+ distance functions, types of embeddings, or indexing best practices. You just pass in your chunked documents, and Zep handles the rest.
112
+
113
+ Zep supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://help.getzep.com/working-with-search#how-zeps-mmr-re-ranking-works).
114
+ MMR search is useful for ensuring that the retrieved documents are diverse and not too similar to each other.
115
+
116
+ ```python
117
+ from langchain_community.vectorstores import ZepCloudVectorStore
118
+ ```
119
+
120
+ See a [usage example](/docs/integrations/vectorstores/zep_cloud).
langchain_md_files/integrations/providers/zhipuai.mdx ADDED
@@ -0,0 +1,18 @@
1
+ # Zhipu AI
2
+
3
+ >[Zhipu AI](https://www.zhipuai.cn/en/aboutus), originating from the technological
4
+ > advancements of `Tsinghua University's Computer Science Department`,
5
+ > is an artificial intelligence company with the mission of teaching machines
6
+ > to think like humans. Its world-leading AI team has developed the cutting-edge
7
+ > large language and multimodal models and built the high-precision billion-scale
8
+ > knowledge graphs, the combination of which uniquely empowers us to create a powerful
9
+ > data- and knowledge-driven cognitive engine towards artificial general intelligence.
10
+
11
+
12
+ ## Chat models
13
+
14
+ See a [usage example](/docs/integrations/chat/zhipuai).
15
+
16
+ ```python
17
+ from langchain_community.chat_models import ChatZhipuAI
18
+ ```
langchain_md_files/integrations/providers/zilliz.mdx ADDED
@@ -0,0 +1,22 @@
1
+ # Zilliz
2
+
3
+ >[Zilliz Cloud](https://zilliz.com/doc/quick_start) is a fully managed cloud service for `LF AI Milvus®`.
4
+
5
+
6
+ ## Installation and Setup
7
+
8
+ Install the Python SDK:
9
+ ```bash
10
+ pip install pymilvus
11
+ ```
12
+
13
+ ## Vectorstore
14
+
15
+ A wrapper around Zilliz indexes allows you to use it as a vectorstore,
16
+ whether for semantic search or example selection.
17
+
18
+ ```python
19
+ from langchain_community.vectorstores import Milvus
20
+ ```
21
+
22
+ For a more detailed walkthrough of the Milvus wrapper, see [this notebook](/docs/integrations/vectorstores/zilliz).
langchain_md_files/integrations/retrievers/index.mdx ADDED
@@ -0,0 +1,37 @@
1
+ ---
2
+ sidebar_position: 0
3
+ sidebar_class_name: hidden
4
+ ---
5
+
6
+ import {CategoryTable, IndexTable} from '@theme/FeatureTables'
7
+
8
+ # Retrievers
9
+
10
+ A [retriever](/docs/concepts/#retrievers) is an interface that returns documents given an unstructured query.
11
+ It is more general than a vector store.
12
+ A retriever does not need to be able to store documents, only to return (or retrieve) them.
13
+ Retrievers can be created from vector stores, but are also broad enough to include [Wikipedia search](/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/).
14
+
15
+ Retrievers accept a string query as input and return a list of [Documents](https://python.langchain.com/v0.2/api_reference/core/documents/langchain_core.documents.base.Document.html) as output.
16
+
17
+ For specifics on how to use retrievers, see the [relevant how-to guides here](/docs/how_to/#retrievers).
18
+
19
+ Note that all [vector stores](/docs/concepts/#vector-stores) can be [cast to retrievers](/docs/how_to/vectorstore_retriever/).
20
+ Refer to the vector store [integration docs](/docs/integrations/vectorstores/) for available vector stores.
21
+ This page lists custom retrievers, implemented via subclassing [BaseRetriever](/docs/how_to/custom_retriever/).
22
+
23
+ ## Bring-your-own documents
24
+
25
+ The below retrievers allow you to index and search a custom corpus of documents.
26
+
27
+ <CategoryTable category="document_retrievers" />
28
+
29
+ ## External index
30
+
31
+ The below retrievers will search over an external index (e.g., constructed from Internet data or similar).
32
+
33
+ <CategoryTable category="external_retrievers" />
34
+
35
+ ## All retrievers
36
+
37
+ <IndexTable />
langchain_md_files/integrations/retrievers/self_query/index.mdx ADDED
@@ -0,0 +1,11 @@
1
+ ---
2
+ sidebar-position: 0
3
+ ---
4
+
5
+ # Self-querying retrievers
6
+
7
+ Learn about how the self-querying retriever works [here](/docs/how_to/self_query).
8
+
9
+ import DocCardList from "@theme/DocCardList";
10
+
11
+ <DocCardList />
langchain_md_files/integrations/text_embedding/index.mdx ADDED
@@ -0,0 +1,18 @@
1
+ ---
2
+ sidebar_position: 0
3
+ sidebar_class_name: hidden
4
+ ---
5
+
6
+ # Embedding models
7
+
8
+ import { CategoryTable, IndexTable } from "@theme/FeatureTables";
9
+
10
+ [Embedding models](/docs/concepts#embedding-models) create a vector representation of a piece of text.
11
+
12
+ This page documents integrations with various model providers that allow you to use embeddings in LangChain.
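+
+ All of these integrations share the same two-method interface. As a sketch (OpenAI is shown only as an example and requires `langchain-openai` plus an `OPENAI_API_KEY`):
+
+ ```python
+ from langchain_openai import OpenAIEmbeddings
+
+ embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
+
+ # Embed a single query string and a batch of documents.
+ query_vector = embeddings.embed_query("What is LangChain?")
+ doc_vectors = embeddings.embed_documents(["LangChain is a framework for LLM apps."])
+ print(len(query_vector))  # dimensionality of the embedding vector
+ ```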
13
+
14
+ <CategoryTable category="text_embedding" />
15
+
16
+ ## All embedding models
17
+
18
+ <IndexTable />
langchain_md_files/integrations/vectorstores/index.mdx ADDED
@@ -0,0 +1,17 @@
1
+ ---
2
+ sidebar_position: 0
3
+ sidebar_class_name: hidden
4
+ ---
5
+
6
+ # Vectorstores
7
+
8
+ import { CategoryTable, IndexTable } from "@theme/FeatureTables";
9
+
10
+ A [vector store](/docs/concepts/#vector-stores) stores [embedded](/docs/concepts/#embedding-models) data and performs similarity search.
11
+
12
+ <CategoryTable category="vectorstores" />
13
+
14
+ ## All Vectorstores
15
+
16
+ <IndexTable />
17
+
langchain_md_files/introduction.mdx ADDED
@@ -0,0 +1,98 @@
1
+ ---
2
+ sidebar_position: 0
3
+ sidebar_class_name: hidden
4
+ ---
5
+
6
+ # Introduction
7
+
8
+ **LangChain** is a framework for developing applications powered by large language models (LLMs).
9
+
10
+ LangChain simplifies every stage of the LLM application lifecycle:
11
+ - **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts#langchain-expression-language-lcel), [components](/docs/concepts), and [third-party integrations](/docs/integrations/platforms/).
12
+ Use [LangGraph](/docs/concepts/#langgraph) to build stateful agents with first-class streaming and human-in-the-loop support.
13
+ - **Productionization**: Use [LangSmith](https://docs.smith.langchain.com/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
14
+ - **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/).
15
+
16
+ import ThemedImage from '@theme/ThemedImage';
17
+ import useBaseUrl from '@docusaurus/useBaseUrl';
18
+
19
+ <ThemedImage
20
+ alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
21
+ sources={{
22
+ light: useBaseUrl('/svg/langchain_stack_062024.svg'),
23
+ dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
24
+ }}
25
+ style={{ width: "100%" }}
26
+ title="LangChain Framework Overview"
27
+ />
28
+
29
+ Concretely, the framework consists of the following open-source libraries:
30
+
31
+ - **`langchain-core`**: Base abstractions and LangChain Expression Language.
32
+ - **`langchain-community`**: Third party integrations.
33
+ - Partner packages (e.g. **`langchain-openai`**, **`langchain-anthropic`**, etc.): Some integrations have been further split into their own lightweight packages that only depend on **`langchain-core`**.
34
+ - **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
35
+ - **[LangGraph](https://langchain-ai.github.io/langgraph)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it.
36
+ - **[LangServe](/docs/langserve)**: Deploy LangChain chains as REST APIs.
37
+ - **[LangSmith](https://docs.smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor LLM applications.
38
+
39
+
40
+ :::note
41
+
42
+ These docs focus on the Python LangChain library. [Head here](https://js.langchain.com) for docs on the JavaScript LangChain library.
43
+
44
+ :::
45
+
46
+ ## [Tutorials](/docs/tutorials)
47
+
48
+ If you're looking to build something specific or are more of a hands-on learner, check out our [tutorials section](/docs/tutorials).
49
+ This is the best place to get started.
50
+
51
+ These are the best ones to get started with:
52
+
53
+ - [Build a Simple LLM Application](/docs/tutorials/llm_chain)
54
+ - [Build a Chatbot](/docs/tutorials/chatbot)
55
+ - [Build an Agent](/docs/tutorials/agents)
56
+ - [Introduction to LangGraph](https://langchain-ai.github.io/langgraph/tutorials/introduction/)
57
+
58
+ Explore the full list of LangChain tutorials [here](/docs/tutorials), and check out other [LangGraph tutorials here](https://langchain-ai.github.io/langgraph/tutorials/).
59
+
60
+
61
+ ## [How-to guides](/docs/how_to)
62
+
63
+ [Here](/docs/how_to) you’ll find short answers to “How do I….?” types of questions.
64
+ These how-to guides don’t cover topics in depth – you’ll find that material in the [Tutorials](/docs/tutorials) and the [API Reference](https://python.langchain.com/v0.2/api_reference/).
65
+ However, these guides will help you quickly accomplish common tasks.
66
+
67
+ Check out [LangGraph-specific how-tos here](https://langchain-ai.github.io/langgraph/how-tos/).
68
+
69
+ ## [Conceptual guide](/docs/concepts)
70
+
71
+ Introductions to all the key parts of LangChain you’ll need to know! [Here](/docs/concepts) you'll find high level explanations of all LangChain concepts.
72
+
73
+ For a deeper dive into LangGraph concepts, check out [this page](https://langchain-ai.github.io/langgraph/concepts/).
74
+
75
+ ## [API reference](https://python.langchain.com/v0.2/api_reference/)
76
+ Head to the reference section for full documentation of all classes and methods in the LangChain Python packages.
77
+
78
+ ## Ecosystem
79
+
80
+ ### [🦜🛠️ LangSmith](https://docs.smith.langchain.com)
81
+ Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.
82
+
83
+ ### [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraph)
84
+ Build stateful, multi-actor applications with LLMs. Integrates smoothly with LangChain, but can be used without it.
85
+
86
+ ## Additional resources
87
+
88
+ ### [Versions](/docs/versions/overview/)
89
+ See what changed in v0.2, learn how to migrate legacy code, read up on our release/versioning policies, and more.
90
+
91
+ ### [Security](/docs/security)
92
+ Read up on [security](/docs/security) best practices to make sure you're developing safely with LangChain.
93
+
94
+ ### [Integrations](/docs/integrations/providers/)
95
+ LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/).
96
+
97
+ ### [Contributing](/docs/contributing)
98
+ Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.
langchain_md_files/people.mdx ADDED
@@ -0,0 +1,46 @@
1
+ ---
2
+ hide_table_of_contents: true
3
+ ---
4
+
5
+ import People from "@theme/People";
6
+
7
+ # People
8
+
9
+ There are some incredible humans from all over the world who have been instrumental in helping the LangChain community flourish 🌐!
10
+
11
+ This page highlights a few of those folks who have dedicated their time to the open-source repo in the form of direct contributions and reviews.
12
+
13
+ ## Top reviewers
14
+
15
+ As LangChain has grown, the amount of surface area that maintainers cover has grown as well.
16
+
17
+ Thank you to the following folks who have gone above and beyond in reviewing incoming PRs 🙏!
18
+
19
+ <People type="top_reviewers"></People>
20
+
21
+ ## Top recent contributors
22
+
23
+ The list below contains contributors who have had the most PRs merged in the last three months, weighted (imperfectly) by impact.
24
+
25
+ Thank you all so much for your time and efforts in making LangChain better ❤️!
26
+
27
+ <People type="top_recent_contributors" count="20"></People>
28
+
29
+ ## Core maintainers
30
+
31
+ Hello there 👋!
32
+
33
+ We're LangChain's core maintainers. If you've spent time in the community, you've probably crossed paths
34
+ with at least one of us already.
35
+
36
+ <People type="maintainers"></People>
37
+
38
+ ## Top all-time contributors
39
+
40
+ And finally, this is an all-time list of all-stars who have made significant contributions to the framework 🌟:
41
+
42
+ <People type="top_contributors"></People>
43
+
44
+ We're so thankful for your support!
45
+
46
+ And one more thank you to [@tiangolo](https://github.com/tiangolo) for inspiration via FastAPI's [excellent people page](https://fastapi.tiangolo.com/fastapi-people).
langchain_md_files/tutorials/index.mdx ADDED
@@ -0,0 +1,54 @@
1
+ ---
2
+ sidebar_position: 0
3
+ sidebar_class_name: hidden
4
+ ---
5
+ # Tutorials
6
+
7
+ New to LangChain or to LLM app development in general? Read this material to quickly get up and running.
8
+
9
+ ## Basics
10
+ - [Build a Simple LLM Application with LCEL](/docs/tutorials/llm_chain)
11
+ - [Build a Chatbot](/docs/tutorials/chatbot)
12
+ - [Build vector stores and retrievers](/docs/tutorials/retrievers)
13
+ - [Build an Agent](/docs/tutorials/agents)
14
+
15
+ ## Working with external knowledge
16
+ - [Build a Retrieval Augmented Generation (RAG) Application](/docs/tutorials/rag)
17
+ - [Build a Conversational RAG Application](/docs/tutorials/qa_chat_history)
18
+ - [Build a Question/Answering system over SQL data](/docs/tutorials/sql_qa)
19
+ - [Build a Query Analysis System](/docs/tutorials/query_analysis)
20
+ - [Build a local RAG application](/docs/tutorials/local_rag)
21
+ - [Build a Question Answering application over a Graph Database](/docs/tutorials/graph)
22
+ - [Build a PDF ingestion and Question/Answering system](/docs/tutorials/pdf_qa/)
23
+
24
+ ## Specialized tasks
25
+ - [Build an Extraction Chain](/docs/tutorials/extraction)
26
+ - [Generate synthetic data](/docs/tutorials/data_generation)
27
+ - [Classify text into labels](/docs/tutorials/classification)
28
+ - [Summarize text](/docs/tutorials/summarization)
29
+
30
+ ## LangGraph
31
+
32
+ LangGraph is an extension of LangChain aimed at
33
+ building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
34
+
35
+ LangGraph documentation is currently hosted on a separate site.
36
+ You can peruse [LangGraph tutorials here](https://langchain-ai.github.io/langgraph/tutorials/).
37
+
38
+ ## LangSmith
39
+
40
+ LangSmith allows you to closely trace, monitor and evaluate your LLM application.
41
+ It seamlessly integrates with LangChain, and you can use it to inspect and debug individual steps of your chains as you build.
42
+
43
+ LangSmith documentation is hosted on a separate site.
44
+ You can peruse [LangSmith tutorials here](https://docs.smith.langchain.com/tutorials/).
45
+
46
+ ### Evaluation
47
+
48
+ LangSmith helps you evaluate the performance of your LLM applications. The below tutorial is a great way to get started:
49
+
50
+ - [Evaluate your LLM application](https://docs.smith.langchain.com/tutorials/Developers/evaluation)
51
+
52
+ ## More
53
+
54
+ For more tutorials, see our [cookbook section](https://github.com/langchain-ai/langchain/tree/master/cookbook).
langchain_md_files/versions/overview.mdx ADDED
@@ -0,0 +1,103 @@
1
+ ---
2
+ sidebar_position: 0
3
+ sidebar_label: Overview of v0.2
4
+ ---
5
+
6
+ # Overview of LangChain v0.2
7
+
8
+ ## What’s new in LangChain?
9
+
10
+ The following features have been added during the development of 0.1.x:
11
+
12
+ - Better streaming support via the [Event Streaming API](https://python.langchain.com/docs/expression_language/streaming/#using-stream-events).
13
+ - [Standardized tool calling support](https://blog.langchain.dev/tool-calling-with-langchain/)
14
+ - A standardized interface for [structuring output](https://github.com/langchain-ai/langchain/discussions/18154)
15
+ - [@chain decorator](https://python.langchain.com/docs/expression_language/how_to/decorator/) to more easily create **RunnableLambdas**
16
+ - [Inspect your runnables](https://python.langchain.com/docs/expression_language/how_to/inspect/)
17
+ - In Python, better async support for many core abstractions (thank you [@cbornet](https://github.com/cbornet)!!)
18
+ - Include response metadata in `AIMessage` to make it easy to access raw output from the underlying models
19
+ - Tooling to visualize [your runnables](https://python.langchain.com/docs/expression_language/how_to/inspect/) or [your langgraph app](https://github.com/langchain-ai/langgraph/blob/main/examples/visualization.ipynb)
20
+ - Interoperability of chat message histories across most providers
21
+ - [Over 20+ partner packages in python](https://python.langchain.com/docs/integrations/platforms/) for popular integrations
22
+
23
+
24
+ ## What’s coming to LangChain?
25
+
26
+ - We’ve been working hard on [langgraph](https://langchain-ai.github.io/langgraph/). We will be building more capabilities on top of it and focusing on making it the go-to framework for agent architectures.
27
+ - Vectorstores V2! We’ll be revisiting our vectorstores abstractions to help improve usability and reliability.
28
+ - Better documentation and versioned docs!
29
+ - We’re planning a breaking release (0.3.0) sometime between July-September to [upgrade to full support of Pydantic 2](https://github.com/langchain-ai/langchain/discussions/19339), and will drop support for Pydantic 1 (including objects originating from the `v1` namespace of Pydantic 2).
30
+
31
+ ## What changed?
32
+
33
+ Due to the rapidly evolving field, LangChain has also evolved rapidly.
34
+
35
+ This document serves to outline at a high level what has changed and why.
36
+
37
+ ### TLDR
38
+
39
+ **As of 0.2.0:**
40
+
41
+ - This release completes the work that we started with release 0.1.0 by removing the dependency of `langchain` on `langchain-community`.
42
+ - The `langchain` package no longer requires `langchain-community`. Instead, `langchain-community` now depends on `langchain-core` and `langchain`.
43
+ - User code that still relies on deprecated imports from `langchain` will continue to work as long as `langchain_community` is installed. These imports will start raising errors in release 0.4.x.
44
+
45
+ **As of 0.1.0:**
46
+
47
+ - `langchain` was split into the following component packages: `langchain-core`, `langchain`, `langchain-community`, `langchain-[partner]` to improve the usability of langchain code in production settings. You can read more about it on our [blog](https://blog.langchain.dev/langchain-v0-1-0/).
48
+
49
+ ### Ecosystem organization
50
+
51
+ By the release of 0.1.0, LangChain had grown to a large ecosystem with many integrations and a large community.
52
+
53
+ To improve the usability of LangChain in production, we split the single `langchain` package into multiple packages. This allowed us to create a good foundation architecture for the LangChain ecosystem and improve the usability of `langchain` in production.
54
+
55
+ Here is the high-level breakdown of the ecosystem:
56
+
57
+ - **langchain-core**: contains core abstractions involving LangChain Runnables, tooling for observability, and base implementations of important abstractions (e.g., Chat Models).
58
+ - **langchain:** contains generic code that is built using interfaces defined in `langchain-core`. This package is for code that generalizes well across different implementations of specific interfaces. For example, `create_tool_calling_agent` works across chat models that support [tool calling capabilities](https://blog.langchain.dev/tool-calling-with-langchain/).
59
+ - **langchain-community**: community maintained 3rd party integrations. Contains integrations based on interfaces defined in **langchain-core**. Maintained by the LangChain community.
60
+ - **Partner Packages (e.g., langchain-[partner])**: Partner packages are packages dedicated to especially popular integrations (e.g., `langchain-openai`, `langchain-anthropic` etc.). The dedicated packages generally benefit from better reliability and support.
61
+ - `langgraph`: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
62
+ - `langserve`: Deploy LangChain chains as REST APIs.
63
+
64
+
65
+ In the 0.1.0 release, `langchain-community` was retained as a required dependency of `langchain`.
66
+
67
+ This allowed imports of vectorstores, chat models, and other integrations to continue working through `langchain`
68
+ rather than forcing users to update all of their imports to `langchain-community`.
69
+
70
+ For the 0.2.0 release, we’re removing the dependency of `langchain` on `langchain-community`. This is something we’ve been planning to do since the 0.1 release because we believe this is the right package architecture.
71
+
72
+ Old imports will continue to work as long as `langchain-community` is installed. These imports will be removed in the 0.4.0 release.
73
+
74
+ To understand why we think breaking the dependency of `langchain` on `langchain-community` is best we should understand what each package is meant to do.
75
+
76
+ `langchain` is meant to contain high-level chains and agent architectures. The logic in these should be specified at the level of abstractions like `ChatModel` and `Retriever`, and should not be specific to any one integration. This has two main benefits:
77
+
78
+ 1. `langchain` is fairly lightweight. Here is the full list of required dependencies (after the split):
79
+
80
+ ```toml
81
+ python = ">=3.8.1,<4.0"
82
+ langchain-core = "^0.2.0"
83
+ langchain-text-splitters = ">=0.0.1,<0.1"
84
+ langsmith = "^0.1.17"
85
+ pydantic = ">=1,<3"
86
+ SQLAlchemy = ">=1.4,<3"
87
+ requests = "^2"
88
+ PyYAML = ">=5.3"
89
+ numpy = "^1"
90
+ aiohttp = "^3.8.3"
91
+ tenacity = "^8.1.0"
92
+ jsonpatch = "^1.33"
93
+ ```
94
+
95
+ 2. `langchain` chains/agents are largely integration-agnostic, which makes it easy to experiment with different integrations and future-proofs your code should there be issues with one specific integration.
96
+
97
+ There is also a third less tangible benefit which is that being integration-agnostic forces us to find only those very generic abstractions and architectures which generalize well across integrations. Given how general the abilities of the foundational tech are, and how quickly the space is moving, having generic architectures is a good way of future-proofing your applications.
98
+
99
+ `langchain-community` is intended to have all integration-specific components that are not yet being maintained in separate `langchain-{partner}` packages. Today this is still the majority of integrations and a lot of code. This code is primarily contributed by the community, while `langchain` is largely written by core maintainers. All of these integrations use optional dependencies and conditional imports, which prevents dependency bloat and conflicts but means compatible dependency versions are not made explicit. Given the volume of integrations in `langchain-community` and the speed at which integrations change, it’s very hard to follow semver versioning, and we currently don’t.
100
+
101
+ All of which is to say that there’s no large benefits to `langchain` depending on `langchain-community` and some obvious downsides: the functionality in `langchain` should be integration agnostic anyways, `langchain-community` can’t be properly versioned, and depending on `langchain-community` increases the [vulnerability surface](https://github.com/langchain-ai/langchain/discussions/19083) of `langchain`.
102
+
103
+ For more context about the reason for the organization please see our blog: https://blog.langchain.dev/langchain-v0-1-0/
langchain_md_files/versions/release_policy.mdx ADDED
@@ -0,0 +1,102 @@
1
+ ---
2
+ sidebar_position: 2
3
+ sidebar_label: Release policy
4
+ ---
5
+
6
+ # LangChain release policy
7
+
8
+ The LangChain ecosystem is composed of different component packages (e.g., `langchain-core`, `langchain`, `langchain-community`, `langgraph`, `langserve`, partner packages etc.)
9
+
10
+ ## Versioning
11
+
12
+ ### `langchain`, `langchain-core`, and integration packages
13
+
14
+ `langchain`, `langchain-core`, `langchain-text-splitters`, and integration packages (`langchain-openai`, `langchain-anthropic`, etc.) follow [semantic versioning](https://semver.org/) in the format of 0.**Y**.**Z**. The packages are under rapid development, and so we are currently versioning them with a major version of 0.
15
+
16
+ Minor version increases will occur for:
17
+
18
+ - Breaking changes for any public interfaces *not* marked as `beta`.
19
+
20
+ Patch version increases will occur for:
21
+
22
+ - Bug fixes,
23
+ - New features,
24
+ - Any changes to private interfaces,
25
+ - Any changes to `beta` features.
26
+
27
+ When upgrading between minor versions, users should review the list of breaking changes and deprecations.
28
+
29
+ From time to time, we will version packages as **release candidates**. These are versions that are intended to be released as stable versions, but we want to get feedback from the community before doing so. Release candidates will be versioned as 0.**Y**.**Z**rc**N**. For example, 0.2.0rc1. If no issues are found, the release candidate will be released as a stable version with the same version number. If issues are found, we will release a new release candidate with an incremented `N` value (e.g., 0.2.0rc2).
30
+
31
+ ### `langchain-community`
32
+
33
+ `langchain-community` is currently on version `0.2.x`.
34
+
35
+ Minor version increases will occur for:
36
+
37
+ - Updates to the major/minor versions of required `langchain-x` dependencies. E.g., when updating the required version of `langchain-core` from `^0.2.x` to `0.3.0`.
38
+
39
+ Patch version increases will occur for:
40
+
41
+ - Bug fixes,
42
+ - New features,
43
+ - Any changes to private interfaces,
44
+ - Any changes to `beta` features,
45
+ - Breaking changes to integrations to reflect breaking changes in the third-party service.
46
+
47
+ Whenever possible we will avoid making breaking changes in patch versions.
48
+ However, if an external API makes a breaking change then breaking changes to the corresponding `langchain-community` integration can occur in a patch version.
49
+
50
+ ### `langchain-experimental`
51
+
52
+ `langchain-experimental` is currently on version `0.0.x`. All changes will be accompanied with patch version increases.
53
+
54
+ ## Release cadence
55
+
56
+ We expect to space out **minor** releases (e.g., from 0.2.x to 0.3.0) of `langchain` and `langchain-core` by at least 2-3 months, as such releases may contain breaking changes.
57
+
58
+ Patch versions are released frequently, up to a few times per week, as they contain bug fixes and new features.
59
+
60
+ ## API stability
61
+
62
+ The development of LLM applications is a rapidly evolving field, and we are constantly learning from our users and the community. As such, we expect that the APIs in `langchain` and `langchain-core` will continue to evolve to better serve the needs of our users.
63
+
64
+ Even though both `langchain` and `langchain-core` are currently in a pre-1.0 state, we are committed to maintaining API stability in these packages.
65
+
66
+ - Breaking changes to the public API will result in a minor version bump (the second digit)
67
+ - Any bug fixes or new features will result in a patch version bump (the third digit)
68
+
69
+ We will generally try to avoid making unnecessary changes, and will provide a deprecation policy for features that are being removed.
70
+
71
+ ### Stability of other packages
72
+
73
+ The stability of other packages in the LangChain ecosystem may vary:
74
+
75
+ - `langchain-community` is a community maintained package that contains 3rd party integrations. While we do our best to review and test changes in `langchain-community`, `langchain-community` is expected to experience more breaking changes than `langchain` and `langchain-core` as it contains many community contributions.
76
+ - Partner packages may follow different stability and versioning policies, and users should refer to the documentation of those packages for more information; however, in general these packages are expected to be stable.
77
+
78
+ ### What is "API stability"?
79
+
80
+ API stability means:
81
+
82
+ - All the public APIs (everything in this documentation) will not be moved or renamed without providing backwards-compatible aliases.
83
+ - If new features are added to these APIs – which is quite possible – they will not break or change the meaning of existing methods. In other words, "stable" does not (necessarily) mean "complete."
84
+ - If, for some reason, an API declared stable must be removed or replaced, it will be declared deprecated but will remain in the API for at least two minor releases. Warnings will be issued when the deprecated method is called.
85
+
86
+ ### **APIs marked as internal**
87
+
88
+ Certain APIs are explicitly marked as “internal” in a couple of ways:
89
+
90
+ - Some documentation refers to internals and mentions them as such. If the documentation says that something is internal, it may change.
91
+ - Functions, methods, and other objects prefixed by a leading underscore (**`_`**). This is the standard Python convention of indicating that something is private; if any method starts with a single **`_`**, it’s an internal API.
92
+ - **Exception:** Certain methods are prefixed with `_` , but do not contain an implementation. These methods are *meant* to be overridden by sub-classes that provide the implementation. Such methods are generally part of the **Public API** of LangChain.
93
+
94
+ ## Deprecation policy
95
+
96
+ We will generally avoid deprecating features until a better alternative is available.
97
+
98
+ When a feature is deprecated, it will continue to work in the current and next minor version of `langchain` and `langchain-core`. After that, the feature will be removed.
99
+
100
+ Since we're expecting to space out minor releases by at least 2-3 months, this means that a feature can be removed within 2-6 months of being deprecated.
101
+
102
+ In some situations, we may allow the feature to remain in the code base for longer periods of time, if it's not causing issues in the packages, to reduce the burden on users.
langchain_md_files/versions/v0_2/deprecations.mdx ADDED
@@ -0,0 +1,902 @@
1
+ ---
2
+ sidebar_position: 3
3
+ sidebar_label: Changes
4
+ keywords: [retrievalqa, llmchain, conversationalretrievalchain]
5
+ ---
6
+
7
+ # Deprecations and Breaking Changes
8
+
9
+ This page contains a list of deprecations and removals in the `langchain` and `langchain-core` packages.
10
+
11
+ New features and improvements are not listed here. See the [overview](/docs/versions/overview/) for a summary of what's new in this release.
12
+
13
+ ## Breaking changes
14
+
15
+ As of release 0.2.0, `langchain` is required to be integration-agnostic. This means that code in `langchain` should not by default instantiate any specific chat models, LLMs, embedding models, vector stores, etc.; instead, the user will be required to specify those explicitly.
16
+
17
+ The following functions and classes require an explicit LLM to be passed as an argument:
18
+
19
+ - `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit`
20
+ - `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit`
21
+ - `langchain.chains.openai_functions.get_openapi_chain`
22
+ - `langchain.chains.router.MultiRetrievalQAChain.from_retrievers`
23
+ - `langchain.indexes.VectorStoreIndexWrapper.query`
24
+ - `langchain.indexes.VectorStoreIndexWrapper.query_with_sources`
25
+ - `langchain.indexes.VectorStoreIndexWrapper.aquery_with_sources`
26
+ - `langchain.chains.flare.FlareChain`
27
+
28
+
29
+ The following classes now require passing an explicit Embedding model as an argument:
30
+
31
+ - `langchain.indexes.VectorstoreIndexCreator`
32
+
33
+ The following code has been removed:
34
+
35
+ - `langchain.natbot.NatBotChain.from_default` removed in favor of the `from_llm` class method.
36
+
37
+ Behavior was changed for the following code:
38
+
39
+
40
+ ### @tool decorator
41
+
42
+ The `@tool` decorator now assigns the function docstring as the tool description. Previously, the `@tool` decorator
43
+ used to prepend the function signature.
44
+
45
+ Before 0.2.0:
46
+
47
+ ```python
48
+ @tool
49
+ def my_tool(x: str) -> str:
50
+ """Some description."""
51
+ return "something"
52
+
53
+ print(my_tool.description)
54
+ ```
55
+
56
+ Would result in: `my_tool: (x: str) -> str - Some description.`
57
+
58
+ As of 0.2.0:
59
+
60
+ It will result in: `Some description.`
61
+
62
+ ## Code that moved to another package
63
+
64
+ Code that was moved from `langchain` into another package (e.g., `langchain-community`).
65
+
66
+ If you try to import it from `langchain`, the import will keep on working, but will raise a deprecation warning. The warning will provide a replacement import statement.
67
+
68
+ ```shell
69
+ python -c "from langchain.document_loaders.markdown import UnstructuredMarkdownLoader"
70
+ ```
71
+
72
+ ```shell
73
+ LangChainDeprecationWarning: Importing UnstructuredMarkdownLoader from langchain.document_loaders is deprecated. Please replace deprecated imports:
74
+
75
+ >> from langchain.document_loaders import UnstructuredMarkdownLoader
76
+
77
+ with new imports of:
78
+
79
+ >> from langchain_community.document_loaders import UnstructuredMarkdownLoader
80
+ ```
81
+
82
+ We will continue supporting the imports in `langchain` until release 0.4 as long as the relevant package where the code lives is installed. (e.g., as long as `langchain_community` is installed.)
83
+
84
+ However, we advise users not to rely on these imports and instead migrate to the new imports. To help with this process, we're releasing a migration script via the LangChain CLI. See further instructions in the migration guide.
85
+
86
+ ## Code targeted for removal
87
+
88
+ Code that has better alternatives available and will eventually be removed, so that there's only a single way to do things (e.g., the `predict_messages` method on chat models has been deprecated in favor of `invoke`).
89
+
90
+ ### astream events V1
91
+
92
+ If you are using `astream_events`, please review how to [migrate to astream events v2](/docs/versions/v0_2/migrating_astream_events).
93
+
94
+ ### langchain_core
95
+
96
+ #### try_load_from_hub
97
+
98
+
99
+ In module: `utils.loading`
100
+ Deprecated: 0.1.30
101
+ Removal: 0.3.0
102
+
103
+
104
+ Alternative: Using the hwchase17/langchain-hub repo for prompts is deprecated. Please use https://smith.langchain.com/hub instead.
105
+
106
+
107
+ #### BaseLanguageModel.predict
108
+
109
+
110
+ In module: `language_models.base`
111
+ Deprecated: 0.1.7
112
+ Removal: 0.3.0
113
+
114
+
115
+ Alternative: invoke
116
+
117
+
118
+ #### BaseLanguageModel.predict_messages
119
+
120
+
121
+ In module: `language_models.base`
122
+ Deprecated: 0.1.7
123
+ Removal: 0.3.0
124
+
125
+
126
+ Alternative: invoke
127
+
128
+
129
+ #### BaseLanguageModel.apredict
130
+
131
+
132
+ In module: `language_models.base`
133
+ Deprecated: 0.1.7
134
+ Removal: 0.3.0
135
+
136
+
137
+ Alternative: ainvoke
138
+
139
+
140
+ #### BaseLanguageModel.apredict_messages
141
+
142
+
143
+ In module: `language_models.base`
144
+ Deprecated: 0.1.7
145
+ Removal: 0.3.0
146
+
147
+
148
+ Alternative: ainvoke
149
+
150
+
151
+ #### RunTypeEnum
152
+
153
+
154
+ In module: `tracers.schemas`
155
+ Deprecated: 0.1.0
156
+ Removal: 0.3.0
157
+
158
+
159
+ Alternative: Use string instead.
160
+
161
+
162
+ #### TracerSessionV1Base
163
+
164
+
165
+ In module: `tracers.schemas`
166
+ Deprecated: 0.1.0
167
+ Removal: 0.3.0
168
+
169
+
170
+ Alternative:
171
+
172
+
173
+ #### TracerSessionV1Create
174
+
175
+
176
+ In module: `tracers.schemas`
177
+ Deprecated: 0.1.0
178
+ Removal: 0.3.0
179
+
180
+
181
+ Alternative:
182
+
183
+
184
+ #### TracerSessionV1
185
+
186
+
187
+ In module: `tracers.schemas`
188
+ Deprecated: 0.1.0
189
+ Removal: 0.3.0
190
+
191
+
192
+ Alternative:
193
+
194
+
195
+ #### TracerSessionBase
196
+
197
+
198
+ In module: `tracers.schemas`
199
+ Deprecated: 0.1.0
200
+ Removal: 0.3.0
201
+
202
+
203
+ Alternative:
204
+
205
+
206
+ #### TracerSession
207
+
208
+
209
+ In module: `tracers.schemas`
210
+ Deprecated: 0.1.0
211
+ Removal: 0.3.0
212
+
213
+
214
+ Alternative:
215
+
216
+
217
+ #### BaseRun
218
+
219
+
220
+ In module: `tracers.schemas`
221
+ Deprecated: 0.1.0
222
+ Removal: 0.3.0
223
+
224
+
225
+ Alternative: Run
226
+
227
+
228
+ #### LLMRun
229
+
230
+
231
+ In module: `tracers.schemas`
232
+ Deprecated: 0.1.0
233
+ Removal: 0.3.0
234
+
235
+
236
+ Alternative: Run
237
+
238
+
239
+ #### ChainRun
240
+
241
+
242
+ In module: `tracers.schemas`
243
+ Deprecated: 0.1.0
244
+ Removal: 0.3.0
245
+
246
+
247
+ Alternative: Run
248
+
249
+
250
+ #### ToolRun
251
+
252
+
253
+ In module: `tracers.schemas`
254
+ Deprecated: 0.1.0
255
+ Removal: 0.3.0
256
+
257
+
258
+ Alternative: Run
259
+
260
+
261
+ #### BaseChatModel.__call__
262
+
263
+
264
+ In module: `language_models.chat_models`
265
+ Deprecated: 0.1.7
266
+ Removal: 0.3.0
267
+
268
+
269
+ Alternative: invoke
270
+
271
+
272
+ #### BaseChatModel.call_as_llm
273
+
274
+
275
+ In module: `language_models.chat_models`
276
+ Deprecated: 0.1.7
277
+ Removal: 0.3.0
278
+
279
+
280
+ Alternative: invoke
281
+
282
+
283
+ #### BaseChatModel.predict
284
+
285
+
286
+ In module: `language_models.chat_models`
287
+ Deprecated: 0.1.7
288
+ Removal: 0.3.0
289
+
290
+
291
+ Alternative: invoke
292
+
293
+
294
+ #### BaseChatModel.predict_messages
295
+
296
+
297
+ In module: `language_models.chat_models`
298
+ Deprecated: 0.1.7
299
+ Removal: 0.3.0
300
+
301
+
302
+ Alternative: invoke
303
+
304
+
305
+ #### BaseChatModel.apredict
306
+
307
+
308
+ In module: `language_models.chat_models`
309
+ Deprecated: 0.1.7
310
+ Removal: 0.3.0
311
+
312
+
313
+ Alternative: ainvoke
314
+
315
+
316
+ #### BaseChatModel.apredict_messages
317
+
318
+
319
+ In module: `language_models.chat_models`
320
+ Deprecated: 0.1.7
321
+ Removal: 0.3.0
322
+
323
+
324
+ Alternative: ainvoke
325
+
326
+
327
+ #### BaseLLM.__call__
328
+
329
+
330
+ In module: `language_models.llms`
331
+ Deprecated: 0.1.7
332
+ Removal: 0.3.0
333
+
334
+
335
+ Alternative: invoke
336
+
337
+
338
+ #### BaseLLM.predict
339
+
340
+
341
+ In module: `language_models.llms`
342
+ Deprecated: 0.1.7
343
+ Removal: 0.3.0
344
+
345
+
346
+ Alternative: invoke
347
+
348
+
349
+ #### BaseLLM.predict_messages
350
+
351
+
352
+ In module: `language_models.llms`
353
+ Deprecated: 0.1.7
354
+ Removal: 0.3.0
355
+
356
+
357
+ Alternative: invoke
358
+
359
+
360
+ #### BaseLLM.apredict
361
+
362
+
363
+ In module: `language_models.llms`
364
+ Deprecated: 0.1.7
365
+ Removal: 0.3.0
366
+
367
+
368
+ Alternative: ainvoke
369
+
370
+
371
+ #### BaseLLM.apredict_messages
372
+
373
+
374
+ In module: `language_models.llms`
375
+ Deprecated: 0.1.7
376
+ Removal: 0.3.0
377
+
378
+
379
+ Alternative: ainvoke
380
+
381
+
382
+ #### BaseRetriever.get_relevant_documents
383
+
384
+
385
+ In module: `retrievers`
386
+ Deprecated: 0.1.46
387
+ Removal: 0.3.0
388
+
389
+
390
+ Alternative: invoke
391
+
392
+
393
+ #### BaseRetriever.aget_relevant_documents
394
+
395
+
396
+ In module: `retrievers`
397
+ Deprecated: 0.1.46
398
+ Removal: 0.3.0
399
+
400
+
401
+ Alternative: ainvoke
402
+
403
+
404
+ #### ChatPromptTemplate.from_role_strings
405
+
406
+
407
+ In module: `prompts.chat`
408
+ Deprecated: 0.0.1
409
+ Removal:
410
+
411
+
412
+ Alternative: from_messages classmethod
413
+
414
+
415
+ #### ChatPromptTemplate.from_strings
416
+
417
+
418
+ In module: `prompts.chat`
419
+ Deprecated: 0.0.1
420
+ Removal:
421
+
422
+
423
+ Alternative: from_messages classmethod
424
+
425
+
426
+ #### BaseTool.__call__
427
+
428
+
429
+ In module: `tools`
430
+ Deprecated: 0.1.47
431
+ Removal: 0.3.0
432
+
433
+
434
+ Alternative: invoke
435
+
436
+
437
+ #### convert_pydantic_to_openai_function
438
+
439
+
440
+ In module: `utils.function_calling`
441
+ Deprecated: 0.1.16
442
+ Removal: 0.3.0
443
+
444
+
445
+ Alternative: langchain_core.utils.function_calling.convert_to_openai_function()
446
+
447
+
448
+ #### convert_pydantic_to_openai_tool
449
+
450
+
451
+ In module: `utils.function_calling`
452
+ Deprecated: 0.1.16
453
+ Removal: 0.3.0
454
+
455
+
456
+ Alternative: langchain_core.utils.function_calling.convert_to_openai_tool()
457
+
458
+
459
+ #### convert_python_function_to_openai_function
460
+
461
+
462
+ In module: `utils.function_calling`
463
+ Deprecated: 0.1.16
464
+ Removal: 0.3.0
465
+
466
+
467
+ Alternative: langchain_core.utils.function_calling.convert_to_openai_function()
468
+
469
+
470
+ #### format_tool_to_openai_function
471
+
472
+
473
+ In module: `utils.function_calling`
474
+ Deprecated: 0.1.16
475
+ Removal: 0.3.0
476
+
477
+
478
+ Alternative: langchain_core.utils.function_calling.convert_to_openai_function()
479
+
480
+
481
+ #### format_tool_to_openai_tool
482
+
483
+
484
+ In module: `utils.function_calling`
485
+ Deprecated: 0.1.16
486
+ Removal: 0.3.0
487
+
488
+
489
+ Alternative: langchain_core.utils.function_calling.convert_to_openai_tool()
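For illustration, a hedged sketch of the consolidated replacement utility; the `Multiply` schema below is a hypothetical example, not something from the deprecation table:

```python
# Sketch: convert_to_openai_tool() replaces the deprecated per-type converters above.
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool

class Multiply(BaseModel):
    """Multiply two integers."""
    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")

# Accepts Pydantic models, plain Python functions, or BaseTools.
tool_schema = convert_to_openai_tool(Multiply)
print(tool_schema["function"]["name"])  # "Multiply"
```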
490
+
491
+
492
+ ### langchain
493
+
494
+
495
+ #### AgentType
496
+
497
+
498
+ In module: `agents.agent_types`
499
+ Deprecated: 0.1.0
500
+ Removal: 0.3.0
501
+
502
+
503
+ Alternative: Use [LangGraph](/docs/how_to/migrate_agent/) or new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc.
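As a hedged sketch of one of the new constructor methods (the tool, model name, and prompt-hub handle are illustrative; `hub.pull` assumes the `langchainhub` package is installed):

```python
# Sketch: create_react_agent replacing the AgentType/initialize_agent pattern.
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = hub.pull("hwchase17/react")  # a community ReAct prompt
agent = create_react_agent(llm, [word_length], prompt)
executor = AgentExecutor(agent=agent, tools=[word_length])
result = executor.invoke({"input": "How many letters are in 'deprecation'?"})
```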
504
+
505
+
506
+ #### Chain.__call__
507
+
508
+
509
+ In module: `chains.base`
510
+ Deprecated: 0.1.0
511
+ Removal: 0.3.0
512
+
513
+
514
+ Alternative: invoke
515
+
516
+
517
+ #### Chain.acall
518
+
519
+
520
+ In module: `chains.base`
521
+ Deprecated: 0.1.0
522
+ Removal: 0.3.0
523
+
524
+
525
+ Alternative: ainvoke
526
+
527
+
528
+ #### Chain.run
529
+
530
+
531
+ In module: `chains.base`
532
+ Deprecated: 0.1.0
533
+ Removal: 0.3.0
534
+
535
+
536
+ Alternative: invoke
537
+
538
+
539
+ #### Chain.arun
540
+
541
+
542
+ In module: `chains.base`
543
+ Deprecated: 0.1.0
544
+ Removal: 0.3.0
545
+
546
+
547
+ Alternative: ainvoke
548
+
549
+
550
+ #### Chain.apply
551
+
552
+
553
+ In module: `chains.base`
554
+ Deprecated: 0.1.0
555
+ Removal: 0.3.0
556
+
557
+
558
+ Alternative: batch
559
+
560
+
561
+ #### LLMChain
562
+
563
+
564
+ In module: `chains.llm`
565
+ Deprecated: 0.1.17
566
+ Removal: 0.3.0
567
+
568
+
569
+ Alternative: [RunnableSequence](/docs/how_to/sequence/), e.g., `prompt | llm`
570
+
571
+ This [migration guide](/docs/versions/migrating_chains/llm_chain) has a side-by-side comparison.
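For reference, a minimal sketch of the RunnableSequence replacement (the model name is illustrative):

```python
# Sketch: LLMChain(llm=llm, prompt=prompt) becomes prompt | llm.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
llm = ChatOpenAI(model="gpt-4o-mini")

chain = prompt | llm                       # replaces LLMChain(llm=llm, prompt=prompt)
result = chain.invoke({"topic": "bears"})  # replaces chain.run(topic="bears")
print(result.content)
```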
572
+
573
+
574
+ #### LLMSingleActionAgent
575
+
576
+
577
+ In module: `agents.agent`
578
+ Deprecated: 0.1.0
579
+ Removal: 0.3.0
580
+
581
+
582
+ Alternative: Use [LangGraph](/docs/how_to/migrate_agent/) or new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc.
583
+
584
+
585
+ #### Agent
586
+
587
+
588
+ In module: `agents.agent`
589
+ Deprecated: 0.1.0
590
+ Removal: 0.3.0
591
+
592
+
593
+ Alternative: Use [LangGraph](/docs/how_to/migrate_agent/) or new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc.
594
+
595
+
596
+ #### OpenAIFunctionsAgent
597
+
598
+
599
+ In module: `agents.openai_functions_agent.base`
600
+ Deprecated: 0.1.0
601
+ Removal: 0.3.0
602
+
603
+
604
+ Alternative: create_openai_functions_agent
605
+
606
+
607
+ #### ZeroShotAgent
608
+
609
+
610
+ In module: `agents.mrkl.base`
611
+ Deprecated: 0.1.0
612
+ Removal: 0.3.0
613
+
614
+
615
+ Alternative: create_react_agent
616
+
617
+
618
+ #### MRKLChain
619
+
620
+
621
+ In module: `agents.mrkl.base`
622
+ Deprecated: 0.1.0
623
+ Removal: 0.3.0
624
+
625
+
626
+ Alternative:
627
+
628
+
629
+ #### ConversationalAgent
630
+
631
+
632
+ In module: `agents.conversational.base`
633
+ Deprecated: 0.1.0
634
+ Removal: 0.3.0
635
+
636
+
637
+ Alternative: create_react_agent
638
+
639
+
640
+ #### ConversationalChatAgent
641
+
642
+
643
+ In module: `agents.conversational_chat.base`
644
+ Deprecated: 0.1.0
645
+ Removal: 0.3.0
646
+
647
+
648
+ Alternative: create_json_chat_agent
649
+
650
+
651
+ #### ChatAgent
652
+
653
+
654
+ In module: `agents.chat.base`
655
+ Deprecated: 0.1.0
656
+ Removal: 0.3.0
657
+
658
+
659
+ Alternative: create_react_agent
660
+
661
+
662
+ #### OpenAIMultiFunctionsAgent
663
+
664
+
665
+ In module: `agents.openai_functions_multi_agent.base`
666
+ Deprecated: 0.1.0
667
+ Removal: 0.3.0
668
+
669
+
670
+ Alternative: create_openai_tools_agent
671
+
672
+
673
+ #### ReActDocstoreAgent
674
+
675
+
676
+ In module: `agents.react.base`
677
+ Deprecated: 0.1.0
678
+ Removal: 0.3.0
679
+
680
+
681
+ Alternative:
682
+
683
+
684
+ #### DocstoreExplorer
685
+
686
+
687
+ In module: `agents.react.base`
688
+ Deprecated: 0.1.0
689
+ Removal: 0.3.0
690
+
691
+
692
+ Alternative:
693
+
694
+
695
+ #### ReActTextWorldAgent
696
+
697
+
698
+ In module: `agents.react.base`
699
+ Deprecated: 0.1.0
700
+ Removal: 0.3.0
701
+
702
+
703
+ Alternative:
704
+
705
+
706
+ #### ReActChain
707
+
708
+
709
+ In module: `agents.react.base`
710
+ Deprecated: 0.1.0
711
+ Removal: 0.3.0
712
+
713
+
714
+ Alternative:
715
+
716
+
717
+ #### SelfAskWithSearchAgent
718
+
719
+
720
+ In module: `agents.self_ask_with_search.base`
721
+ Deprecated: 0.1.0
722
+ Removal: 0.3.0
723
+
724
+
725
+ Alternative: create_self_ask_with_search
726
+
727
+
728
+ #### SelfAskWithSearchChain
729
+
730
+
731
+ In module: `agents.self_ask_with_search.base`
732
+ Deprecated: 0.1.0
733
+ Removal: 0.3.0
734
+
735
+
736
+ Alternative:
737
+
738
+
739
+ #### StructuredChatAgent
740
+
741
+
742
+ In module: `agents.structured_chat.base`
743
+ Deprecated: 0.1.0
744
+ Removal: 0.3.0
745
+
746
+
747
+ Alternative: create_structured_chat_agent
748
+
749
+
750
+ #### RetrievalQA
751
+
752
+
753
+ In module: `chains.retrieval_qa.base`
754
+ Deprecated: 0.1.17
755
+ Removal: 0.3.0
756
+
757
+
758
+ Alternative: [create_retrieval_chain](https://python.langchain.com/v0.2/api_reference/langchain/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain)
759
+ This [migration guide](/docs/versions/migrating_chains/retrieval_qa) has a side-by-side comparison.
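A hedged sketch of the replacement, assuming you already have a `retriever` (the model name and prompt are illustrative):

```python
# Sketch: create_retrieval_chain + create_stuff_documents_chain replace RetrievalQA.
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the question using only this context:\n\n{context}"),
    ("human", "{input}"),
])

combine_docs_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, combine_docs_chain)  # `retriever` is assumed to exist

response = rag_chain.invoke({"input": "What is task decomposition?"})
print(response["answer"])
```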
760
+
761
+
762
+ #### load_agent_from_config
763
+
764
+
765
+ In module: `agents.loading`
766
+ Deprecated: 0.1.0
767
+ Removal: 0.3.0
768
+
769
+
770
+ Alternative:
771
+
772
+
773
+ #### load_agent
774
+
775
+
776
+ In module: `agents.loading`
777
+ Deprecated: 0.1.0
778
+ Removal: 0.3.0
779
+
780
+
781
+ Alternative:
782
+
783
+
784
+ #### initialize_agent
785
+
786
+
787
+ In module: `agents.initialize`
788
+ Deprecated: 0.1.0
789
+ Removal: 0.3.0
790
+
791
+
792
+ Alternative: Use [LangGraph](/docs/how_to/migrate_agent/) or new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc.
793
+
794
+
795
+ #### XMLAgent
796
+
797
+
798
+ In module: `agents.xml.base`
799
+ Deprecated: 0.1.0
800
+ Removal: 0.3.0
801
+
802
+
803
+ Alternative: create_xml_agent
804
+
805
+
806
+ #### CohereRerank
807
+
808
+
809
+ In module: `retrievers.document_compressors.cohere_rerank`
810
+ Deprecated: 0.0.30
811
+ Removal: 0.3.0
812
+
813
+
814
+ Alternative: langchain_cohere.CohereRerank
815
+
816
+
817
+ #### ConversationalRetrievalChain
818
+
819
+
820
+ In module: `chains.conversational_retrieval.base`
821
+ Deprecated: 0.1.17
822
+ Removal: 0.3.0
823
+
824
+
825
+ Alternative: [create_history_aware_retriever](https://python.langchain.com/v0.2/api_reference/langchain/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) together with [create_retrieval_chain](https://python.langchain.com/v0.2/api_reference/langchain/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain) (see example in docstring)
826
+ This [migration guide](/docs/versions/migrating_chains/conversation_retrieval_chain) has a side-by-side comparison.
827
+
828
+
829
+ #### create_extraction_chain_pydantic
830
+
831
+
832
+ In module: `chains.openai_tools.extraction`
833
+ Deprecated: 0.1.14
834
+ Removal: 0.3.0
835
+
836
+
837
+ Alternative: [with_structured_output](/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling.
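For illustration, a minimal sketch of the `with_structured_output` replacement (the schema and model name are illustrative):

```python
# Sketch: with_structured_output() replaces the deprecated extraction chain constructors.
from langchain_core.pydantic_v1 import BaseModel
from langchain_openai import ChatOpenAI

class Person(BaseModel):
    """Information about a person."""
    name: str
    age: int

llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(Person)
person = structured_llm.invoke("Alice is 30 years old.")  # -> Person(name="Alice", age=30)
```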
838
+
839
+
840
+ #### create_openai_fn_runnable
841
+
842
+
843
+ In module: `chains.structured_output.base`
844
+ Deprecated: 0.1.14
845
+ Removal: 0.3.0
846
+
847
+
848
+ Alternative: [with_structured_output](/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling.
849
+
850
+
851
+ #### create_structured_output_runnable
852
+
853
+
854
+ In module: `chains.structured_output.base`
855
+ Deprecated: 0.1.17
856
+ Removal: 0.3.0
857
+
858
+
859
+ Alternative: [with_structured_output](/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling.
860
+
861
+
862
+ #### create_openai_fn_chain
863
+
864
+
865
+ In module: `chains.openai_functions.base`
866
+ Deprecated: 0.1.1
867
+ Removal: 0.3.0
868
+
869
+
870
+ Alternative: create_openai_fn_runnable
871
+
872
+
873
+ #### create_structured_output_chain
874
+
875
+
876
+ In module: `chains.openai_functions.base`
877
+ Deprecated: 0.1.1
878
+ Removal: 0.3.0
879
+
880
+ Alternative: ChatOpenAI.with_structured_output
881
+
882
+
883
+ #### create_extraction_chain
884
+
885
+
886
+ In module: `chains.openai_functions.extraction`
887
+ Deprecated: 0.1.14
888
+ Removal: 0.3.0
889
+
890
+
891
+ Alternative: [with_structured_output](/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling.
892
+
893
+
894
+ #### create_extraction_chain_pydantic
895
+
896
+
897
+ In module: `chains.openai_functions.extraction`
898
+ Deprecated: 0.1.14
899
+ Removal: 0.3.0
900
+
901
+
902
+ Alternative: [with_structured_output](/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling.
langchain_md_files/versions/v0_2/index.mdx ADDED
@@ -0,0 +1,93 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ sidebar_position: 1
3
+ ---
4
+
5
+ # Migrating to LangChain v0.2
6
+
7
+
8
+
9
+ LangChain v0.2 was released in May 2024. This release includes a number of [breaking changes and deprecations](/docs/versions/v0_2/deprecations). This document contains a guide on upgrading to 0.2.x.
10
+
11
+ :::note Reference
12
+
13
+ - [Breaking Changes & Deprecations](/docs/versions/v0_2/deprecations)
14
+ - [Migrating legacy chains to LCEL](/docs/versions/migrating_chains)
15
+ - [Migrating to Astream Events v2](/docs/versions/v0_2/migrating_astream_events)
16
+
17
+ :::
18
+
19
+ # Migration
20
+
21
+ This documentation will help you upgrade your code to LangChain `0.2.x`. To prepare for migration, we first recommend you take the following steps:
22
+
23
+ 1. Install the 0.2.x versions of langchain-core and langchain, and upgrade to recent versions of any other packages you may be using (e.g. langgraph, langchain-community, langchain-openai, etc.).
24
+ 2. Verify that your code runs properly with the new packages (e.g., unit tests pass).
25
+ 3. Install a recent version of `langchain-cli`, and use the tool to replace old imports used by your code with the new imports. (See instructions below.)
26
+ 4. Manually resolve any remaining deprecation warnings.
27
+ 5. Re-run unit tests.
28
+ 6. If you are using `astream_events`, please review how to [migrate to astream events v2](/docs/versions/v0_2/migrating_astream_events).
29
+
30
+ ## Upgrade to new imports
31
+
32
+ We created a tool to help migrate your code. This tool is still in **beta** and may not cover all cases, but
33
+ we hope that it will help you migrate your code more quickly.
34
+
35
+ The migration script has the following limitations:
36
+
37
+ 1. It’s limited to helping users move from old imports to new imports. It does not help address other deprecations.
38
+ 2. It can’t handle imports that involve `as`.
39
+ 3. New imports are always placed in global scope, even if the old import that was replaced was located inside some local scope (e.g., a function body).
40
+ 4. It will likely miss some deprecated imports.
41
+
42
+ Here is an example of the import changes that the migration script can help apply automatically:
43
+
44
+
45
+ | From Package | To Package | Deprecated Import | New Import |
46
+ |---------------------|--------------------------|--------------------------------------------------------------------|---------------------------------------------------------------------|
47
+ | langchain | langchain-community | from langchain.vectorstores import InMemoryVectorStore | from langchain_community.vectorstores import InMemoryVectorStore |
48
+ | langchain-community | langchain_openai | from langchain_community.chat_models import ChatOpenAI | from langchain_openai import ChatOpenAI |
49
+ | langchain-community | langchain-core | from langchain_community.document_loaders import Blob | from langchain_core.document_loaders import Blob |
50
+ | langchain | langchain-core | from langchain.schema.document import Document | from langchain_core.documents import Document |
51
+ | langchain | langchain-text-splitters | from langchain.text_splitter import RecursiveCharacterTextSplitter | from langchain_text_splitters import RecursiveCharacterTextSplitter |
52
+
53
+
54
+ ## Installation
55
+
56
+ ```bash
57
+ pip install langchain-cli
58
+ langchain-cli --version # <-- Make sure the version is at least 0.0.22
59
+ ```
60
+
61
+ ## Usage
62
+
63
+ Given that the migration script is not perfect, you should make sure you have a backup of your code first (e.g., using version control like `git`).
64
+
65
+ You will need to run the migration script **twice** as it only applies one import replacement per run.
66
+
67
+ For example, say your code still uses `from langchain.chat_models import ChatOpenAI`:
68
+
69
+ After the first run, you’ll get: `from langchain_community.chat_models import ChatOpenAI`
70
+ After the second run, you’ll get: `from langchain_openai import ChatOpenAI`
71
+
72
+ ```bash
73
+ # Run a first time
74
+ # Will replace from langchain.chat_models import ChatOpenAI
75
+ langchain-cli migrate --diff [path to code] # Preview
76
+ langchain-cli migrate [path to code] # Apply
77
+
78
+ # Run a second time to apply more import replacements
79
+ langchain-cli migrate --diff [path to code] # Preview
80
+ langchain-cli migrate [path to code] # Apply
81
+ ```
82
+
83
+ ### Other options
84
+
85
+ ```bash
86
+ # See help menu
87
+ langchain-cli migrate --help
88
+ # Preview Changes without applying
89
+ langchain-cli migrate --diff [path to code]
90
+ # Run on code including ipython notebooks
91
+ # Apply all import updates except for updates from langchain to langchain-core
92
+ langchain-cli migrate --disable langchain_to_core --include-ipynb [path to code]
93
+ ```
langchain_md_files/versions/v0_2/migrating_astream_events.mdx ADDED
@@ -0,0 +1,118 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ sidebar_position: 2
3
+ sidebar_label: astream_events v2
4
+ ---
5
+
6
+ # Migrating to Astream Events v2
7
+
8
+ We've added a `v2` of the astream_events API with the release of `0.2.x`. You can see this [PR](https://github.com/langchain-ai/langchain/pull/21638) for more details.
9
+
10
+ The `v2` version is a re-write of the `v1` version, and should be more efficient, with more consistent output for the events. The `v1` version of the API will be deprecated in favor of the `v2` version and will be removed in `0.4.0`.
11
+
12
+ Below is a list of changes between the `v1` and `v2` versions of the API.
13
+
14
+
15
+ ### output for `on_chat_model_end`
16
+
17
+ In `v1`, the outputs associated with `on_chat_model_end` changed depending on whether the
18
+ chat model was run as a root level runnable or as part of a chain.
19
+
20
+ As a root level runnable the output was:
21
+
22
+ ```python
23
+ "data": {"output": AIMessageChunk(content="hello world!", id='some id')}
24
+ ```
25
+
26
+ As part of a chain the output was:
27
+
28
+ ```
29
+ "data": {
30
+ "output": {
31
+ "generations": [
32
+ [
33
+ {
34
+ "generation_info": None,
35
+ "message": AIMessageChunk(
36
+ content="hello world!", id=AnyStr()
37
+ ),
38
+ "text": "hello world!",
39
+ "type": "ChatGenerationChunk",
40
+ }
41
+ ]
42
+ ],
43
+ "llm_output": None,
44
+ }
45
+ },
46
+ ```
47
+
48
+
49
+ As of `v2`, the output will always be the simpler representation:
50
+
51
+ ```python
52
+ "data": {"output": AIMessageChunk(content="hello world!", id='some id')}
53
+ ```
54
+
55
+ :::note
56
+ Non-chat models (i.e., regular LLMs) will consistently be associated with the more verbose format for now.
57
+ :::
58
+
59
+ ### output for `on_retriever_end`
60
+
61
+ `on_retriever_end` output will always return a list of `Documents`.
62
+
63
+ Before:
64
+ ```python
65
+ {
66
+ "data": {
67
+ "output": [
68
+ Document(...),
69
+ Document(...),
70
+ ...
71
+ ]
72
+ }
73
+ }
74
+ ```
75
+
76
+ ### Removed `on_retriever_stream`
77
+
78
+ The `on_retriever_stream` event was an artifact of the implementation and has been removed.
79
+
80
+ Full information associated with the event is already available in the `on_retriever_end` event.
81
+
82
+ Please use `on_retriever_end` instead.
83
+
84
+ ### Removed `on_tool_stream`
85
+
86
+ The `on_tool_stream` event was an artifact of the implementation and has been removed.
87
+
88
+ Full information associated with the event is already available in the `on_tool_end` event.
89
+
90
+ Please use `on_tool_end` instead.
91
+
92
+ ### Propagating Names
93
+
94
+ Names of runnables have been updated to be more consistent.
95
+
96
+ ```python
97
+ model = GenericFakeChatModel(messages=infinite_cycle).configurable_fields(
98
+ messages=ConfigurableField(
99
+ id="messages",
100
+ name="Messages",
101
+ description="Messages return by the LLM",
102
+ )
103
+ )
104
+ ```
105
+
106
+ In `v1`, the event name was `RunnableConfigurableFields`.
107
+
108
+ In `v2`, the event name is `GenericFakeChatModel`.
109
+
110
+ If you're filtering by event names, check if you need to update your filters.
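For example, a hypothetical filter (the runnable `chain` and its input are placeholders):

```python
# Sketch: filtering astream_events v2 by runnable name via include_names.
async for event in chain.astream_events(
    {"topic": "parrots"},
    version="v2",
    include_names=["GenericFakeChatModel"],  # the v2 name shown above
):
    print(event["event"], event["name"])
```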
111
+
112
+ ### RunnableRetry
113
+
114
+ Usage of [RunnableRetry](https://python.langchain.com/v0.2/api_reference/core/runnables/langchain_core.runnables.retry.RunnableRetry.html)
115
+ within an LCEL chain being streamed generated an incorrect `on_chain_end` event in `v1` corresponding
116
+ to the failed runnable invocation that was being retried. This event has been removed in `v2`.
117
+
118
+ No action is required for this change.
openai-cookbook_md_files/How_to_build_an_agent_with_the_node_sdk.mdx ADDED
@@ -0,0 +1,492 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # How to build an agent with the Node.js SDK
2
+
3
+ OpenAI functions enable your app to take action based on user inputs. This means that it can, e.g., search the web, send emails, or book tickets on behalf of your users, making it more powerful than a regular chatbot.
4
+
5
+ In this tutorial, you will build an app that uses OpenAI functions along with the latest version of the Node.js SDK. The app runs in the browser, so you only need a code editor and, e.g., VS Code Live Server to follow along locally. Alternatively, write your code directly in the browser via [this code playground at Scrimba.](https://scrimba.com/scrim/c6r3LkU9)
6
+
7
+ ## What you will build
8
+
9
+ Our app is a simple agent that helps you find activities in your area.
10
+ It has access to two functions, `getLocation()` and `getCurrentWeather()`,
11
+ which means it can figure out where you’re located and what the weather
12
+ is at the moment.
13
+
14
+ At this point, it's important to understand that
15
+ OpenAI doesn't execute any code for you. It just tells your app which
16
+ functions it should use in a given scenario, and then leaves it up to
17
+ your app to invoke them.
18
+
19
+ Once our agent knows your location and the weather, it'll use GPT’s
20
+ internal knowledge to suggest suitable local activities for you.
21
+
22
+ ## Importing the SDK and authenticating with OpenAI
23
+
24
+ We start by importing the OpenAI SDK at the top of our JavaScript file and authenticate with our API key, which we have stored as an environment variable.
25
+
26
+ ```js
27
+ import OpenAI from "openai";
28
+
29
+ const openai = new OpenAI({
30
+ apiKey: process.env.OPENAI_API_KEY,
31
+ dangerouslyAllowBrowser: true,
32
+ });
33
+ ```
34
+
35
+ Since we're running our code in a browser environment at Scrimba, we also need to set `dangerouslyAllowBrowser: true` to confirm we understand the risks involved with client-side API requests. Please note that you should move these requests over to a Node server in a production app.
36
+
37
+ ## Creating our two functions
38
+
39
+ Next, we'll create the two functions. The first one - `getLocation` -
40
+ uses the [IP API](https://ipapi.co/) to get the location of the
41
+ user.
42
+
43
+ ```js
44
+ async function getLocation() {
45
+ const response = await fetch("https://ipapi.co/json/");
46
+ const locationData = await response.json();
47
+ return locationData;
48
+ }
49
+ ```
50
+
51
+ The IP API returns a bunch of data about your location, including your
52
+ latitude and longitude, which we’ll use as arguments in the second
53
+ function `getCurrentWeather`. It uses the [Open Meteo
54
+ API](https://open-meteo.com/) to get the current weather data, like
55
+ this:
56
+
57
+ ```js
58
+ async function getCurrentWeather(latitude, longitude) {
59
+ const url = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&hourly=apparent_temperature`;
60
+ const response = await fetch(url);
61
+ const weatherData = await response.json();
62
+ return weatherData;
63
+ }
64
+ ```
65
+
66
+ ## Describing our functions for OpenAI
67
+
68
+ For OpenAI to understand the purpose of these functions, we need to
69
+ describe them using a specific schema. We'll create an array called
70
+ `tools` that contains one object per function. Each object
71
+ will have two keys: `type`, `function`, and the `function` key has
72
+ three subkeys: `name`, `description`, and `parameters`.
73
+
74
+ ```js
75
+ const tools = [
76
+ {
77
+ type: "function",
78
+ function: {
79
+ name: "getCurrentWeather",
80
+ description: "Get the current weather in a given location",
81
+ parameters: {
82
+ type: "object",
83
+ properties: {
84
+ latitude: {
85
+ type: "string",
86
+ },
87
+ longitude: {
88
+ type: "string",
89
+ },
90
+ },
91
+ required: ["longitude", "latitude"],
92
+ },
93
+ }
94
+ },
95
+ {
96
+ type: "function",
97
+ function: {
98
+ name: "getLocation",
99
+ description: "Get the user's location based on their IP address",
100
+ parameters: {
101
+ type: "object",
102
+ properties: {},
103
+ },
104
+ }
105
+ },
106
+ ];
107
+ ```
108
+
109
+ ## Setting up the messages array
110
+
111
+ We also need to define a `messages` array. This will keep track of all of the messages back and forth between our app and OpenAI.
112
+
113
+ The first object in the array should always have the `role` property set to `"system"`, which tells OpenAI that this is how we want it to behave.
114
+
115
+ ```js
116
+ const messages = [
117
+ {
118
+ role: "system",
119
+ content:
120
+ "You are a helpful assistant. Only use the functions you have been provided with.",
121
+ },
122
+ ];
123
+ ```
124
+
125
+ ## Creating the agent function
126
+
127
+ We are now ready to build the logic of our app, which lives in the
128
+ `agent` function. It is asynchronous and takes one argument: the
129
+ `userInput`.
130
+
131
+ We start by pushing the `userInput` to the messages array. This time, we set the `role` to `"user"`, so that OpenAI knows that this is the input from the user.
132
+
133
+ ```js
134
+ async function agent(userInput) {
135
+ messages.push({
136
+ role: "user",
137
+ content: userInput,
138
+ });
139
+ const response = await openai.chat.completions.create({
140
+ model: "gpt-4",
141
+ messages: messages,
142
+ tools: tools,
143
+ });
144
+ console.log(response);
145
+ }
146
+ ```
147
+
148
+ Next, we'll send a request to the Chat completions endpoint via the
149
+ `chat.completions.create()` method in the Node SDK. This method takes a
150
+ configuration object as an argument. In it, we'll specify three
151
+ properties:
152
+
153
+ - `model` - Decides which AI model we want to use (in our case,
154
+ GPT-4).
155
+ - `messages` - The entire history of messages between the user and the
156
+ AI up until this point.
157
+ - `tools` - A list of tools the model may call. Currently, only
158
+ functions are supported as a tool., we'll we use the `tools` array we
159
+ created earlier.
160
+
161
+ ## Running our app with a simple input
162
+
163
+ Let's try to run the `agent` with an input that requires a function call to give a suitable reply.
164
+
165
+ ```js
166
+ agent("Where am I located right now?");
167
+ ```
168
+
169
+ When we run the code above, we see the response from OpenAI logged out
170
+ to the console like this:
171
+
172
+ ```js
173
+ {
174
+ id: "chatcmpl-84ojoEJtyGnR6jRHK2Dl4zTtwsa7O",
175
+ object: "chat.completion",
176
+ created: 1696159040,
177
+ model: "gpt-4-0613",
178
+ choices: [{
179
+ index: 0,
180
+ message: {
181
+ role: "assistant",
182
+ content: null,
183
+ tool_calls: [
184
+ id: "call_CBwbo9qoXUn1kTR5pPuv6vR1",
185
+ type: "function",
186
+ function: {
187
+ name: "getLocation",
188
+ arguments: "{}"
189
+ }
190
+ ]
191
+ },
192
+ logprobs: null,
193
+ finish_reason: "tool_calls" // OpenAI wants us to call a function
194
+ }],
195
+ usage: {
196
+ prompt_tokens: 134,
197
+ completion_tokens: 6,
198
+ total_tokens: 140
199
+ }
200
+ system_fingerprint: null
201
+ }
202
+ ```
203
+
204
+ This response tells us that we should call one of our functions, as it contains the following key: `finish_reason: "tool_calls"`.
205
+
206
+ The name of the function can be found in the
207
+ `response.choices[0].message.tool_calls[0].function.name` key, which is set to
208
+ `"getLocation"`.
209
+
210
+ ## Turning the OpenAI response into a function call
211
+
212
+ Now that we have the name of the function as a string, we'll need to
213
+ translate that into a function call. To help us with that, we'll gather
214
+ both of our functions in an object called `availableTools`:
215
+
216
+ ```js
217
+ const availableTools = {
218
+ getCurrentWeather,
219
+ getLocation,
220
+ };
221
+ ```
222
+
223
+ This is handy because we'll be able to access the `getLocation` function
224
+ via bracket notation and the string we got back from OpenAI, like this:
225
+ `availableTools["getLocation"]`.
226
+
227
+ ```js
228
+ const { finish_reason, message } = response.choices[0];
229
+
230
+ if (finish_reason === "tool_calls" && message.tool_calls) {
231
+ const functionName = message.tool_calls[0].function.name;
232
+ const functionToCall = availableTools[functionName];
233
+ const functionArgs = JSON.parse(message.tool_calls[0].function.arguments);
234
+ const functionArgsArr = Object.values(functionArgs);
235
+ const functionResponse = await functionToCall.apply(null, functionArgsArr);
236
+ console.log(functionResponse);
237
+ }
238
+ ```
239
+
240
+ We're also grabbing ahold of any arguments OpenAI wants us to pass into
241
+ the function: `message.tool_calls[0].function.arguments`.
242
+ However, we won't need any arguments for this first function call.
243
+
244
+ If we run the code again with the same input
245
+ (`"Where am I located right now?"`), we'll see that `functionResponse`
246
+ is an object filled with location about where the user is located right
247
+ now. In my case, that is Oslo, Norway.
248
+
249
+ ```js
250
+ {ip: "193.212.60.170", network: "193.212.60.0/23", version: "IPv4", city: "Oslo", region: "Oslo County", region_code: "03", country: "NO", country_name: "Norway", country_code: "NO", country_code_iso3: "NOR", country_capital: "Oslo", country_tld: ".no", continent_code: "EU", in_eu: false, postal: "0026", latitude: 59.955, longitude: 10.859, timezone: "Europe/Oslo", utc_offset: "+0200", country_calling_code: "+47", currency: "NOK", currency_name: "Krone", languages: "no,nb,nn,se,fi", country_area: 324220, country_population: 5314336, asn: "AS2119", org: "Telenor Norge AS"}
251
+ ```
252
+
253
+ We'll add this data to a new item in the `messages` array, where we also
254
+ specify the name of the function we called.
255
+
256
+ ```js
257
+ messages.push({
258
+ role: "function",
259
+ name: functionName,
260
+ content: `The result of the last function was this: ${JSON.stringify(
261
+ functionResponse
262
+ )}
263
+ `,
264
+ });
265
+ ```
266
+
267
+ Notice that the `role` is set to `"function"`. This tells OpenAI
268
+ that the `content` parameter contains the result of the function call
269
+ and not the input from the user.
270
+
271
+ At this point, we need to send a new request to OpenAI with this updated
272
+ `messages` array. However, we don’t want to hard code a new function
273
+ call, as our agent might need to go back and forth between itself and
274
+ GPT several times until it has found the final answer for the user.
275
+
276
+ This can be solved in several different ways, e.g. recursion, a
277
+ while-loop, or a for-loop. We'll use a good old for-loop for the sake of
278
+ simplicity.
279
+
280
+ ## Creating the loop
281
+
282
+ At the top of the `agent` function, we'll create a loop that lets us run
283
+ the entire procedure up to five times.
284
+
285
+ If we get back `finish_reason: "tool_calls"` from GPT, we'll just
286
+ push the result of the function call to the `messages` array and jump to
287
+ the next iteration of the loop, triggering a new request.
288
+
289
+ If we get `finish_reason: "stop"` back, then GPT has found a suitable
290
+ answer, so we'll return the function and cancel the loop.
291
+
292
+ ```js
293
+ for (let i = 0; i < 5; i++) {
294
+ const response = await openai.chat.completions.create({
295
+ model: "gpt-4",
296
+ messages: messages,
297
+ tools: tools,
298
+ });
299
+ const { finish_reason, message } = response.choices[0];
300
+
301
+ if (finish_reason === "tool_calls" && message.tool_calls) {
302
+ const functionName = message.tool_calls[0].function.name;
303
+ const functionToCall = availableTools[functionName];
304
+ const functionArgs = JSON.parse(message.tool_calls[0].function.arguments);
305
+ const functionArgsArr = Object.values(functionArgs);
306
+ const functionResponse = await functionToCall.apply(null, functionArgsArr);
307
+
308
+ messages.push({
309
+ role: "function",
310
+ name: functionName,
311
+ content: `
312
+ The result of the last function was this: ${JSON.stringify(
313
+ functionResponse
314
+ )}
315
+ `,
316
+ });
317
+ } else if (finish_reason === "stop") {
318
+ messages.push(message);
319
+ return message.content;
320
+ }
321
+ }
322
+ return "The maximum number of iterations has been met without a suitable answer. Please try again with a more specific input.";
323
+ ```
324
+
325
+ If we don't see a `finish_reason: "stop"` within our five iterations,
326
+ we'll return a message saying we couldn’t find a suitable answer.
327
+
328
+ ## Running the final app
329
+
330
+ At this point, we are ready to try our app! I'll ask the agent to
331
+ suggest some activities based on my location and the current weather.
332
+
333
+ ```js
334
+ const response = await agent(
335
+ "Please suggest some activities based on my location and the current weather."
336
+ );
337
+ console.log(response);
338
+ ```
339
+
340
+ Here's what we see in the console (formatted to make it easier to read):
341
+
342
+ ```js
343
+ Based on your current location in Oslo, Norway and the weather (15°C and snowy),
344
+ here are some activity suggestions:
345
+
346
+ 1. A visit to the Oslo Winter Park for skiing or snowboarding.
347
+ 2. Enjoy a cosy day at a local café or restaurant.
348
+ 3. Visit one of Oslo's many museums. The Fram Museum or Viking Ship Museum offer interesting insights into Norway’s seafaring history.
349
+ 4. Take a stroll in the snowy streets and enjoy the beautiful winter landscape.
350
+ 5. Enjoy a nice book by the fireplace in a local library.
351
+ 6. Take a fjord sightseeing cruise to enjoy the snowy landscapes.
352
+
353
+ Always remember to bundle up and stay warm. Enjoy your day!
354
+ ```
355
+
356
+ If we peek under the hood and log out `response.choices[0].message` in
357
+ each iteration of the loop, we'll see that GPT has instructed us to use
358
+ both our functions before coming up with an answer.
359
+
360
+ First, it tells us to call the `getLocation` function. Then it tells us
361
+ to call the `getCurrentWeather` function with
362
+ `"longitude": "10.859", "latitude": "59.955"` passed in as the
363
+ arguments. This is data it got back from the first function call we did.
364
+
365
+ ```js
366
+ {"role":"assistant","content":null,"tool_calls":[{"id":"call_Cn1KH8mtHQ2AMbyNwNJTweEP","type":"function","function":{"name":"getLocation","arguments":"{}"}}]}
367
+ {"role":"assistant","content":null,"tool_calls":[{"id":"call_uc1oozJfGTvYEfIzzcsfXfOl","type":"function","function":{"name":"getCurrentWeather","arguments":"{\n\"latitude\": \"10.859\",\n\"longitude\": \"59.955\"\n}"}}]}
368
+ ```
369
+
370
+ You've now built an AI agent using OpenAI functions and the Node.js SDK! If you're looking for an extra challenge, consider enhancing this app. For example, you could add a function that fetches up-to-date information on events and activities in the user's location.
371
+
372
+ Happy coding!
373
+
374
+ <details>
375
+ <summary>Complete code</summary>
376
+
377
+ ```js
378
+ import OpenAI from "openai";
379
+
380
+ const openai = new OpenAI({
381
+ apiKey: process.env.OPENAI_API_KEY,
382
+ dangerouslyAllowBrowser: true,
383
+ });
384
+
385
+ async function getLocation() {
386
+ const response = await fetch("https://ipapi.co/json/");
387
+ const locationData = await response.json();
388
+ return locationData;
389
+ }
390
+
391
+ async function getCurrentWeather(latitude, longitude) {
392
+ const url = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&hourly=apparent_temperature`;
393
+ const response = await fetch(url);
394
+ const weatherData = await response.json();
395
+ return weatherData;
396
+ }
397
+
398
+ const tools = [
399
+ {
400
+ type: "function",
401
+ function: {
402
+ name: "getCurrentWeather",
403
+ description: "Get the current weather in a given location",
404
+ parameters: {
405
+ type: "object",
406
+ properties: {
407
+ latitude: {
408
+ type: "string",
409
+ },
410
+ longitude: {
411
+ type: "string",
412
+ },
413
+ },
414
+ required: ["longitude", "latitude"],
415
+ },
416
+ }
417
+ },
418
+ {
419
+ type: "function",
420
+ function: {
421
+ name: "getLocation",
422
+ description: "Get the user's location based on their IP address",
423
+ parameters: {
424
+ type: "object",
425
+ properties: {},
426
+ },
427
+ }
428
+ },
429
+ ];
430
+
431
+ const availableTools = {
432
+ getCurrentWeather,
433
+ getLocation,
434
+ };
435
+
436
+ const messages = [
437
+ {
438
+ role: "system",
439
+ content: `You are a helpful assistant. Only use the functions you have been provided with.`,
440
+ },
441
+ ];
442
+
443
+ async function agent(userInput) {
444
+ messages.push({
445
+ role: "user",
446
+ content: userInput,
447
+ });
448
+
449
+ for (let i = 0; i < 5; i++) {
450
+ const response = await openai.chat.completions.create({
451
+ model: "gpt-4",
452
+ messages: messages,
453
+ tools: tools,
454
+ });
455
+
456
+ const { finish_reason, message } = response.choices[0];
457
+
458
+ if (finish_reason === "tool_calls" && message.tool_calls) {
459
+ const functionName = message.tool_calls[0].function.name;
460
+ const functionToCall = availableTools[functionName];
461
+ const functionArgs = JSON.parse(message.tool_calls[0].function.arguments);
462
+ const functionArgsArr = Object.values(functionArgs);
463
+ const functionResponse = await functionToCall.apply(
464
+ null,
465
+ functionArgsArr
466
+ );
467
+
468
+ messages.push({
469
+ role: "function",
470
+ name: functionName,
471
+ content: `
472
+ The result of the last function was this: ${JSON.stringify(
473
+ functionResponse
474
+ )}
475
+ `,
476
+ });
477
+ } else if (finish_reason === "stop") {
478
+ messages.push(message);
479
+ return message.content;
480
+ }
481
+ }
482
+ return "The maximum number of iterations has been met without a suitable answer. Please try again with a more specific input.";
483
+ }
484
+
485
+ const response = await agent(
486
+ "Please suggest some activities based on my location and the weather."
487
+ );
488
+
489
+ console.log("response:", response);
490
+ ```
491
+
492
+ </details>
openai-cookbook_md_files/vector_databases/supabase/semantic-search.mdx ADDED
@@ -0,0 +1,276 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Semantic search using Supabase Vector
2
+
3
+ The purpose of this guide is to demonstrate how to store OpenAI embeddings in [Supabase Vector](https://supabase.com/docs/guides/ai) (Postgres + pgvector) for the purposes of semantic search.
4
+
5
+ [Supabase](https://supabase.com/docs) is an open-source Firebase alternative built on top of [Postgres](https://en.wikipedia.org/wiki/PostgreSQL), a production-grade SQL database. Since Supabase Vector is built on [pgvector](https://github.com/pgvector/pgvector), you can store your embeddings within the same database that holds the rest of your application data. When combined with pgvector's indexing algorithms, vector search remains [fast at large scales](https://supabase.com/blog/increase-performance-pgvector-hnsw).
6
+
7
+ Supabase adds an ecosystem of services and tools to make app development as quick as possible (such as an [auto-generated REST API](https://postgrest.org/)). We'll use these services to store and query embeddings within Postgres.
8
+
9
+ This guide covers:
10
+
11
+ 1. [Setting up your database](#setup-database)
12
+ 2. [Creating a SQL table](#create-a-vector-table) that can store vector data
13
+ 3. [Generating OpenAI embeddings](#generate-openai-embeddings) using OpenAI's JavaScript client
14
+ 4. [Storing the embeddings](#store-embeddings-in-database) in your SQL table using the Supabase JavaScript client
15
+ 5. [Performing semantic search](#semantic-search) over the embeddings using a Postgres function and the Supabase JavaScript client
16
+
17
+ ## Setup database
18
+
19
+ First head over to https://database.new to provision your Supabase database. This will create a Postgres database on the Supabase cloud platform. Alternatively, you can follow the [local development](https://supabase.com/docs/guides/cli/getting-started) options if you prefer to run your database locally using Docker.
20
+
21
+ In the studio, jump to the [SQL editor](https://supabase.com/dashboard/project/_/sql/new) and execute the following SQL to enable pgvector:
22
+
23
+ ```sql
24
+ -- Enable the pgvector extension
25
+ create extension if not exists vector;
26
+ ```
27
+
28
+ > In a production application, the best practice is to use [database migrations](https://supabase.com/docs/guides/cli/local-development#database-migrations) so that all SQL operations are managed within source control. To keep things simple in this guide, we'll execute queries directly in the SQL Editor. If you are building a production app, feel free to move these into a database migration.
29
+
30
+ ## Create a vector table
31
+
32
+ Next we'll create a table to store documents and embeddings. In the SQL Editor, run:
33
+
34
+ ```sql
35
+ create table documents (
36
+ id bigint primary key generated always as identity,
37
+ content text not null,
38
+ embedding vector (1536) not null
39
+ );
40
+ ```
41
+
42
+ Since Supabase is built on Postgres, we're just using regular SQL here. You can modify this table however you like to better fit your application. If you have existing database tables, you can simply add a new `vector` column to the appropriate table.
43
+
44
+ The important piece to understand is the `vector` data type, which is a new data type that became available when we enabled the pgvector extension earlier. The size of the vector (1536 here) represents the number of dimensions in the embedding. Since we're using OpenAI's `text-embedding-3-small` model in this example, we set the vector size to 1536.
45
+
46
+ Let's go ahead and create a vector index on this table so that future queries remain performant as the table grows:
47
+
48
+ ```sql
49
+ create index on documents using hnsw (embedding vector_ip_ops);
50
+ ```
51
+
52
+ This index uses the [HNSW](https://supabase.com/docs/guides/ai/vector-indexes/hnsw-indexes) algorithm to index vectors stored in the `embedding` column, and specifically when using the inner product operator (`<#>`). We'll explain more about this operator later when we implement our match function.
53
+
54
+ Let's also follow security best practices by enabling row level security on the table:
55
+
56
+ ```sql
57
+ alter table documents enable row level security;
58
+ ```
59
+
60
+ This will prevent unauthorized access to this table through the auto-generated REST API (more on this shortly).
61
+
62
+ ## Generate OpenAI embeddings
63
+
64
+ This guide uses JavaScript to generate embeddings, but you can easily modify it to use any [language supported by OpenAI](https://platform.openai.com/docs/libraries).
65
+
66
+ If you are using JavaScript, feel free to use whichever server-side JavaScript runtime that you prefer (Node.js, Deno, Supabase Edge Functions).
67
+
68
+ If you're using Node.js, first install `openai` as a dependency:
69
+
70
+ ```shell
71
+ npm install openai
72
+ ```
73
+
74
+ then import it:
75
+
76
+ ```js
77
+ import OpenAI from "openai";
78
+ ```
79
+
80
+ If you're using Deno or Supabase Edge Functions, you can import `openai` directly from a URL:
81
+
82
+ ```js
83
+ import OpenAI from "https://esm.sh/openai@4";
84
+ ```
85
+
86
+ > In this example we import from https://esm.sh which is a CDN that automatically fetches the respective NPM module for you and serves it over HTTP.
87
+
88
+ Next we'll generate an OpenAI embedding using [`text-embedding-3-small`](https://platform.openai.com/docs/guides/embeddings/embedding-models):
89
+
90
+ ```js
91
+ const openai = new OpenAI();
92
+
93
+ const input = "The cat chases the mouse";
94
+
95
+ const result = await openai.embeddings.create({
96
+ input,
97
+ model: "text-embedding-3-small",
98
+ });
99
+
100
+ const [{ embedding }] = result.data;
101
+ ```
102
+
103
+ Remember that you will need an [OpenAI API key](https://platform.openai.com/api-keys) to interact with the OpenAI API. You can pass this as an environment variable called `OPENAI_API_KEY`, or manually set it when you instantiate your OpenAI client:
104
+
105
+ ```js
106
+ const openai = new OpenAI({
107
+ apiKey: "<openai-api-key>",
108
+ });
109
+ ```
110
+
111
+ _**Remember:** Never hard-code API keys in your code. Best practice is to either store it in a `.env` file and load it using a library like [`dotenv`](https://github.com/motdotla/dotenv) or load it from an external key management system._
112
+
113
+ ## Store embeddings in database
114
+
115
+ Supabase comes with an [auto-generated REST API](https://postgrest.org/) that dynamically builds REST endpoints for each of your tables. This means you don't need to establish a direct Postgres connection to your database - instead you can interact with it simply by using the REST API. This is especially useful in serverless environments that run short-lived processes where re-establishing a database connection every time can be expensive.
116
+
117
+ Supabase comes with a number of [client libraries](https://supabase.com/docs#client-libraries) to simplify interaction with the REST API. In this guide we'll use the [JavaScript client library](https://supabase.com/docs/reference/javascript), but feel free to adjust this to your preferred language.
118
+
119
+ If you're using Node.js, install `@supabase/supabase-js` as a dependency:
120
+
121
+ ```shell
122
+ npm install @supabase/supabase-js
123
+ ```
124
+
125
+ then import it:
126
+
127
+ ```js
128
+ import { createClient } from "@supabase/supabase-js";
129
+ ```
130
+
131
+ If you're using Deno or Supabase Edge Functions, you can import `@supabase/supabase-js` directly from a URL:
132
+
133
+ ```js
134
+ import { createClient } from "https://esm.sh/@supabase/supabase-js@2";
135
+ ```
136
+
137
+ Next we'll instantiate our Supabase client and configure it so that it points to your Supabase project. In this guide we'll store a reference to your Supabase URL and key in a `.env` file, but feel free to modify this based on how your application handles configuration.
138
+
139
+ If you are using Node.js or Deno, add your Supabase URL and service role key to a `.env` file. If you are using the cloud platform, you can find these from your Supabase dashboard [settings page](https://supabase.com/dashboard/project/_/settings/api). If you're running Supabase locally, you can find these by running `npx supabase status` in a terminal.
140
+
141
+ _.env_
142
+
143
+ ```
144
+ SUPABASE_URL=<supabase-url>
145
+ SUPABASE_SERVICE_ROLE_KEY=<supabase-service-role-key>
146
+ ```
147
+
148
+ If you are using Supabase Edge Functions, these environment variables are automatically injected into your function for you so you can skip the above step.
149
+
150
+ Next we'll pull these environment variables into our app.
151
+
152
+ In Node.js, install the `dotenv` dependency:
153
+
154
+ ```shell
155
+ npm install dotenv
156
+ ```
157
+
158
+ And retrieve the environment variables from `process.env`:
159
+
160
+ ```js
161
+ import { config } from "dotenv";
162
+
163
+ // Load .env file
164
+ config();
165
+
166
+ const supabaseUrl = process.env["SUPABASE_URL"];
167
+ const supabaseServiceRoleKey = process.env["SUPABASE_SERVICE_ROLE_KEY"];
168
+ ```
169
+
170
+ In Deno, load the `.env` file using the `dotenv` standard library:
171
+
172
+ ```js
173
+ import { load } from "https://deno.land/std@0.208.0/dotenv/mod.ts";
174
+
175
+ // Load .env file
176
+ const env = await load();
177
+
178
+ const supabaseUrl = env["SUPABASE_URL"];
179
+ const supabaseServiceRoleKey = env["SUPABASE_SERVICE_ROLE_KEY"];
180
+ ```
181
+
182
+ In Supabase Edge Functions, simply load the injected environment variables directly:
183
+
184
+ ```js
185
+ const supabaseUrl = Deno.env.get("SUPABASE_URL");
186
+ const supabaseServiceRoleKey = Deno.env.get("SUPABASE_SERVICE_ROLE_KEY");
187
+ ```
188
+
189
+ Next let's instantiate our `supabase` client:
190
+
191
+ ```js
192
+ const supabase = createClient(supabaseUrl, supabaseServiceRoleKey, {
193
+ auth: { persistSession: false },
194
+ });
195
+ ```
196
+
197
+ From here we use the `supabase` client to insert our text and embedding (generated earlier) into the database:
198
+
199
+ ```js
200
+ const { error } = await supabase.from("documents").insert({
201
+ content: input,
202
+ embedding,
203
+ });
204
+ ```
205
+
206
+ > In production, best practice would be to check the response `error` to see if there were any problems inserting the data and handle it accordingly.
207
+
208
+ ## Semantic search
209
+
210
+ Finally let's perform semantic search over the embeddings in our database. At this point we'll assume your `documents` table has been filled with multiple records that we can search over.
211
+
212
+ Let's create a match function in Postgres that performs the semantic search query. Execute the following in the [SQL Editor](https://supabase.com/dashboard/project/_/sql/new):
213
+
214
+ ```sql
215
+ create function match_documents (
216
+ query_embedding vector (1536),
217
+ match_threshold float
218
+ )
219
+ returns setof documents
220
+ language plpgsql
221
+ as $$
222
+ begin
223
+ return query
224
+ select *
225
+ from documents
226
+ where documents.embedding <#> query_embedding < -match_threshold
227
+ order by documents.embedding <#> query_embedding;
228
+ end;
229
+ $$;
230
+ ```
231
+
232
+ This function accepts a `query_embedding` which represents the embedding generated from the search query text (more on this shortly). It also accepts a `match_threshold` which specifies how similar the document embeddings have to be in order for `query_embedding` to count as a match.
233
+
234
+ Inside the function we implement the query which does two things:
235
+
236
+ - Filters the documents to only include those whose embeddings match within the above `match_threshold`. Since the `<#>` operator performs the negative inner product (versus positive inner product), we negate the similarity threshold before comparing. This means a `match_threshold` of 1 is most similar, and -1 is most dissimilar.
237
+ - Orders the documents by negative inner product (`<#>`) ascending. This allows us to retrieve documents that match closest first.
238
+
239
+ > Since OpenAI embeddings are normalized, we opted to use inner product (`<#>`) because it is slightly more performant than other operators like cosine distance (`<=>`). It is important to note though this only works because the embeddings are normalized - if they weren't, cosine distance should be used.
240
+
241
+ Now we can call this function from our application using the `supabase.rpc()` method:
242
+
243
+ ```js
244
+ const query = "What does the cat chase?";
245
+
246
+ // First create an embedding on the query itself
247
+ const result = await openai.embeddings.create({
248
+ input: query,
249
+ model: "text-embedding-3-small",
250
+ });
251
+
252
+ const [{ embedding }] = result.data;
253
+
254
+ // Then use this embedding to search for matches
255
+ const { data: documents, error: matchError } = await supabase
256
+ .rpc("match_documents", {
257
+ query_embedding: embedding,
258
+ match_threshold: 0.8,
259
+ })
260
+ .select("content")
261
+ .limit(5);
262
+ ```
263
+
264
+ In this example, we set the match threshold to 0.8. Adjust this threshold based on what works best with your data.
265
+
266
+ Note that since `match_documents` returns a set of `documents`, we can treat this `rpc()` like a regular table query. Specifically this means we can chain additional commands to this query, like `select()` and `limit()`. Here we select just the columns we care about from the `documents` table (`content`), and we limit the number of documents returned (max 5 in this example).
267
+
268
+ At this point you have a list of documents that matched the query based on semantic relationship, ordered by most similar first.
269
+
270
+ ## Next steps
271
+
272
+ You can use this example as the foundation for other semantic search techniques, like retrieval augmented generation (RAG).
273
+
274
+ For more information on OpenAI embeddings, read the [Embedding](https://platform.openai.com/docs/guides/embeddings) docs.
275
+
276
+ For more information on Supabase Vector, read the [AI & Vector](https://supabase.com/docs/guides/ai) docs.
trl_md_files/alignprop_trainer.mdx ADDED
@@ -0,0 +1,91 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Aligning Text-to-Image Diffusion Models with Reward Backpropagation
2
+
3
+ ## The why
4
+
5
+ If your reward function is differentiable, directly backpropagating gradients from the reward models to the diffusion model is significantly more sample and compute efficient (25x) than using a policy gradient algorithm like DDPO.
6
+ AlignProp does full backpropagation through time, which allows updating the earlier steps of denoising via reward backpropagation.
7
+
8
+ <div style="text-align: center"><img src="https://align-prop.github.io/reward_tuning.png"/></div>
9
+
10
+
11
+ ## Getting started with `examples/scripts/alignprop.py`
12
+
13
+ The `alignprop.py` script is a working example of using the `AlignProp` trainer to finetune a Stable Diffusion model. This example explicitly configures a small subset of the overall parameters associated with the config object (`AlignPropConfig`).
14
+
15
+ **Note:** one A100 GPU is recommended to get this running. For a lower-memory setting, consider setting `truncated_backprop_rand` to False. With default settings this will do truncated backpropagation with K=1.
16
+
17
+ Almost every configuration parameter has a default. Only one command-line flag argument is required of the user to get things up and running: the user is expected to have a [huggingface user access token](https://huggingface.co/docs/hub/security-tokens) that will be used to upload the model to the Hugging Face Hub after finetuning. Enter the following bash command to get things running:
18
+
19
+ ```batch
20
+ python alignprop.py --hf_user_access_token <token>
21
+ ```
22
+
23
+ To obtain the documentation of `stable_diffusion_tuning.py`, please run `python stable_diffusion_tuning.py --help`
24
+
25
+ The following are things to keep in mind (the code checks these for you as well) when configuring the trainer beyond the example script; a config sketch follows the list below:
26
+
27
+ - For the configurable randomized truncation range (`--alignprop_config.truncated_rand_backprop_minmax=(0,50)`), the first number should be greater than or equal to 0, while the second number should be less than or equal to the number of diffusion timesteps (`sample_num_steps`)
28
+ - The configurable absolute truncation backprop step (`--alignprop_config.truncated_backprop_timestep=49`) should be less than the number of diffusion timesteps (`sample_num_steps`); it only matters when `truncated_backprop_rand` is set to False
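As a rough sketch of these settings (field names follow the bullet points above; defaults and exact fields may differ slightly in your TRL version):

```python
# Hypothetical configuration illustrating the truncated-backprop settings above.
from trl import AlignPropConfig

config = AlignPropConfig(
    sample_num_steps=50,
    truncated_backprop_rand=True,            # randomized truncation (default)
    truncated_rand_backprop_minmax=(0, 50),  # must lie within [0, sample_num_steps]
    truncated_backprop_timestep=49,          # only used when truncated_backprop_rand=False
)
```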
29
+
30
+ ## Setting up the image logging hook function
31
+
32
+ Expect the function to be given a dictionary with keys
33
+ ```python
34
+ ['image', 'prompt', 'prompt_metadata', 'rewards']
35
+
36
+ ```
37
+ and `image`, `prompt`, `prompt_metadata`, `rewards` are batched.
38
+ You are free to log however you want; the use of `wandb` or `tensorboard` is recommended.
39
+
40
+ ### Key terms
41
+
42
+ - `rewards` : The reward/score is a numerical value associated with the generated image and is key to steering the RL process
43
+ - `prompt` : The prompt is the text that is used to generate the image
44
+ - `prompt_metadata` : The prompt metadata is the metadata associated with the prompt. A situation where this will not be empty is when the reward model comprises of a [`FLAVA`](https://huggingface.co/docs/transformers/model_doc/flava) setup where questions and ground answers (linked to the generated image) are expected with the generated image (See here: https://github.com/kvablack/ddpo-pytorch/blob/main/ddpo_pytorch/rewards.py#L45)
45
+ - `image` : The image generated by the Stable Diffusion model
46
+
47
+ Example code for logging sampled images with `wandb` is given below.
48
+
49
+ ```python
50
+ from PIL import Image
+ import numpy as np
+
+ # for logging these images to wandb
51
+
52
+ def image_outputs_hook(image_data, global_step, accelerate_logger):
53
+ # unpack the batched images, prompts and rewards
54
+ # passed to the hook at this logging step
55
+ result = {}
56
+ images, prompts, rewards = image_data['images'], image_data['prompts'], image_data['rewards']
57
+ for i, image in enumerate(images):
58
+ pil = Image.fromarray(
59
+ (image.cpu().numpy().transpose(1, 2, 0) * 255).astype(np.uint8)
60
+ )
61
+ pil = pil.resize((256, 256))
62
+ result[f"{prompts[i]:.25} | {rewards[i]:.2f}"] = [pil]
63
+ accelerate_logger.log_images(
64
+ result,
65
+ step=global_step,
66
+ )
67
+
68
+ ```
69
+
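+ A rough sketch of wiring this hook into the trainer is shown below. It assumes the positional argument order used by `examples/scripts/alignprop.py` (config, reward function, prompt function, pipeline) and that the hook is passed via an `image_samples_hook` keyword; `config`, `reward_fn`, `prompt_fn` and `pipeline` are placeholders you would define yourself, so treat the example script as the authoritative reference.
+
+ ```python
+ from trl import AlignPropTrainer
+
+ # reward_fn: differentiable callable scoring generated images
+ # prompt_fn: callable returning (prompt, prompt_metadata)
+ # pipeline:  the Stable Diffusion pipeline wrapper built by the example script
+ trainer = AlignPropTrainer(
+     config,
+     reward_fn,
+     prompt_fn,
+     pipeline,
+     image_samples_hook=image_outputs_hook,
+ )
+ trainer.train()
+ ```
+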
70
+ ### Using the finetuned model
71
+
72
+ Assuming you've finished all the epochs and pushed your model up to the hub, you can use the finetuned model as follows
73
+
74
+ ```python
75
+ import os
+
+ from diffusers import StableDiffusionPipeline
76
+ pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
77
+ pipeline.to("cuda")
78
+
79
+ pipeline.load_lora_weights('mihirpd/alignprop-trl-aesthetics')
80
+
81
+ prompts = ["squirrel", "crab", "starfish", "whale", "sponge", "plankton"]
82
+ results = pipeline(prompts)
83
+
84
+ os.makedirs("dump", exist_ok=True)
+ for prompt, image in zip(prompts, results.images):
85
+ image.save(f"dump/{prompt}.png")
86
+ ```
87
+
88
+ ## Credits
89
+
90
+ This work is heavily influenced by the repo [here](https://github.com/mihirp1998/AlignProp/) and the associated paper [Aligning Text-to-Image Diffusion Models with Reward Backpropagation
91
+ by Mihir Prabhudesai, Anirudh Goyal, Deepak Pathak, Katerina Fragkiadaki](https://huggingface.co/papers/2310.03739).
trl_md_files/bco_trainer.mdx ADDED
@@ -0,0 +1,139 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # BCO Trainer
2
+
3
+ TRL supports Binary Classifier Optimization (BCO).
4
+ The [BCO](https://huggingface.co/papers/2404.04656) authors train a binary classifier whose logit serves as a reward so that the classifier maps {prompt, chosen completion} pairs to 1 and {prompt, rejected completion} pairs to 0.
5
+ For a full example, have a look at the `examples/scripts/bco.py` script.
6
+
7
+ ## Expected dataset format
8
+
9
+ The BCO trainer expects a very specific format for the dataset as it does not require pairwise preferences. Since the model will be trained to directly optimize examples that consist of a prompt, model completion, and a label to indicate whether the completion is "good" or "bad", we expect a dataset with the following columns:
10
+
11
+ - `prompt`
12
+ - `completion`
13
+ - `label`
14
+
15
+ for example:
16
+
17
+ ```python
18
+ bco_dataset_dict = {
19
+ "prompt": [
20
+ "Hey, hello",
21
+ "How are you",
22
+ "What is your name?",
23
+ "What is your name?",
24
+ "Which is the best programming language?",
25
+ "Which is the best programming language?",
26
+ "Which is the best programming language?",
27
+ ],
28
+ "completion": [
29
+ "hi nice to meet you",
30
+ "leave me alone",
31
+ "I don't have a name",
32
+ "My name is Mary",
33
+ "Python",
34
+ "C++",
35
+ "Java",
36
+ ],
37
+ "label": [
38
+ True,
39
+ False,
40
+ False,
41
+ True,
42
+ True,
43
+ False,
44
+ False,
45
+ ],
46
+ }
47
+ ```
48
+
49
+ where the `prompt` contains the context inputs, `completion` contains the corresponding responses and `label` contains the corresponding flag that indicates if the generated completion is desired (`True`) or undesired (`False`).
50
+ A prompt can have multiple responses and this is reflected in the entries being repeated in the dictionary's value arrays. It is required that the dataset contains at least one desirable and one undesirable completion.
51
+
52
+
53
+ ## Expected model format
54
+ The BCO trainer expects a model of `AutoModelForCausalLM`, compared to PPO that expects `AutoModelForCausalLMWithValueHead` for the value function.
55
+
56
+ ## Using the `BCOTrainer`
57
+
58
+ For a detailed example have a look at the `examples/scripts/bco.py` script. At a high level we need to initialize the `BCOTrainer` with a `model` we wish to train and a reference `ref_model` which we will use to calculate the implicit rewards of the preferred and rejected response.
59
+
60
+ The `beta` refers to the hyperparameter of the implicit reward, and the dataset contains the 3 entries listed above. Note that the `model` and `ref_model` need to have the same architecture (i.e. decoder-only or encoder-decoder); a minimal loading sketch is shown below.
61
+
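+ The snippet below is only a loading sketch: the `"gpt2"` checkpoint is a placeholder for whatever SFT model you actually want to align, and `train_dataset` is the dataset described above.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "gpt2"  # placeholder checkpoint; substitute your own SFT model
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+ model_ref = AutoModelForCausalLM.from_pretrained(model_id)  # frozen reference copy
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ if tokenizer.pad_token is None:
+     tokenizer.pad_token = tokenizer.eos_token
+ ```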
62
+
63
+
64
+ ```py
65
+ from trl import BCOConfig, BCOTrainer
+
+ training_args = BCOConfig(
66
+ beta=0.1,
67
+ )
68
+
69
+ bco_trainer = BCOTrainer(
70
+ model,
71
+ model_ref,
72
+ args=training_args,
73
+ train_dataset=train_dataset,
74
+ tokenizer=tokenizer,
75
+ )
76
+ ```
77
+ After this one can then call:
78
+
79
+ ```py
80
+ bco_trainer.train()
81
+ ```
82
+
83
+ ## Underlying Distribution matching (UDM)
84
+
85
+ In practical scenarios, the thumbs-up and thumbs-down datasets are likely to have divergent underlying distributions of prompts.
86
+ Consider an LLM deployed for user feedback: if the model excels in writing tasks but underperforms in coding, the thumbs-up dataset will be dominated by writing-related prompts, while the thumbs-down dataset will contain mostly coding-related prompts.
87
+ If the prompts in your desired and undesired datasets differ a lot, it is useful to enable UDM.
88
+
89
+ Choose an embedding model and tokenizer:
90
+
91
+ ```py
92
+ from functools import partial
+
+ from accelerate import Accelerator
+ from transformers import AutoModel, AutoTokenizer
+
+ embedding_model = AutoModel.from_pretrained(your_model_id)
93
+ embedding_tokenizer = AutoTokenizer.from_pretrained(your_model_id)
94
+
95
+ # customize this function depending on your embedding model
96
+ def embed_prompt(input_ids, attention_mask, model):
97
+ outputs = model(input_ids=input_ids, attention_mask=attention_mask)
98
+ return outputs.last_hidden_state.mean(dim=1)
99
+
100
+ embedding_model = Accelerator().prepare_model(embedding_model)
101
+ embedding_func = partial(embed_prompt, model=embedding_model)
102
+ ```
103
+
104
+ Set `prompt_sample_size` to define how many prompts are selected to train the UDM classifier and start the training with the provided embedding function:
105
+
106
+ ```py
107
+ training_args = BCOConfig(
108
+ beta=0.1,
109
+ prompt_sample_size=512,
110
+ )
111
+
112
+ bco_trainer = BCOTrainer(
113
+ model,
114
+ model_ref,
115
+ args=training_args,
116
+ train_dataset=train_dataset,
117
+ tokenizer=tokenizer,
118
+ embedding_func=embedding_func,
119
+ embedding_tokenizer=embedding_tokenizer,
120
+ )
121
+
122
+ bco_trainer.train()
123
+ ```
124
+
125
+ ### For Mixture of Experts Models: Enabling the auxiliary loss
126
+
127
+ MOEs are the most efficient if the load is about equally distributed between experts.
128
+ To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss.
129
+
130
+ This option is enabled by setting `output_router_logits=True` in the model config (e.g. MixtralConfig).
131
+ To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter `router_aux_loss_coef=...` (default: 0.001).
132
+
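+ As a sketch of what this looks like when loading a Mixtral-style model (the config attribute names are taken from `MixtralConfig`, and the checkpoint name is only an example):
+
+ ```python
+ from transformers import AutoModelForCausalLM
+
+ # enable the router auxiliary loss so it is added to the training loss
+ model = AutoModelForCausalLM.from_pretrained(
+     "mistralai/Mixtral-8x7B-v0.1",   # example MoE checkpoint
+     output_router_logits=True,
+     router_aux_loss_coef=0.001,      # weight of the load-balancing loss
+ )
+ ```
+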
133
+ ## BCOTrainer
134
+
135
+ [[autodoc]] BCOTrainer
136
+
137
+ ## BCOConfig
138
+
139
+ [[autodoc]] BCOConfig
trl_md_files/best_of_n.mdx ADDED
@@ -0,0 +1,72 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Best of N sampling: Alternative ways to get better model output without RL based fine-tuning
2
+
3
+ Within the extras module is the `best-of-n` sampler class that serves as an alternative method of generating better model output.
4
+ As to how it fares against RL-based fine-tuning, please look in the `examples` directory for a comparison example
5
+
6
+ ## Usage
7
+
8
+ To get started quickly, instantiate an instance of the class with a model, a length sampler, a tokenizer and a callable that serves as a proxy reward pipeline that outputs reward scores for input queries
9
+
10
+ ```python
11
+
12
+ from transformers import pipeline, AutoTokenizer
13
+ from trl import AutoModelForCausalLMWithValueHead
14
+ from trl.core import LengthSampler
15
+ from trl.extras import BestOfNSampler
16
+
17
+ # example checkpoints and device; substitute your own
+ ref_model_name = "lvwerra/gpt2-imdb"
+ reward_model = "lvwerra/distilbert-imdb"
+ device = 0  # GPU index; use -1 for CPU
+
+ model = AutoModelForCausalLMWithValueHead.from_pretrained(ref_model_name)
18
+ reward_pipe = pipeline("sentiment-analysis", model=reward_model, device=device)
19
+ tokenizer = AutoTokenizer.from_pretrained(ref_model_name)
20
+ tokenizer.pad_token = tokenizer.eos_token
21
+
22
+
23
+ # callable that takes a list of raw text and returns a list of corresponding reward scores
24
+ def queries_to_scores(list_of_strings):
25
+ return [output["score"] for output in reward_pipe(list_of_strings)]
26
+
+ # sample a completion length between 4 and 16 tokens for each generation
+ output_length_sampler = LengthSampler(4, 16)
+
27
+ best_of_n = BestOfNSampler(model, tokenizer, queries_to_scores, length_sampler=output_length_sampler)
28
+
29
+
30
+ ```
31
+
32
+ And assuming you have a list/tensor of tokenized queries, you can generate better output by calling the `generate` method
33
+
34
+ ```python
35
+
36
+ best_of_n.generate(query_tensors, device=device, **gen_kwargs)
37
+
38
+ ```
39
+ The default sample size is 4, but you can change it at the time of instance initialization like so
40
+
41
+ ```python
42
+
43
+ best_of_n = BestOfNSampler(model, tokenizer, queries_to_scores, length_sampler=output_length_sampler, sample_size=8)
44
+
45
+ ```
46
+
47
+ The default output is the result of taking the top scored output for each query, but you can change it to top 2 and so on by passing the `n_candidates` argument at the time of instance initialization
48
+
49
+ ```python
50
+
51
+ best_of_n = BestOfNSampler(model, tokenizer, queries_to_scores, length_sampler=output_length_sampler, n_candidates=2)
52
+
53
+ ```
54
+
55
+ There is the option of setting the generation settings (like `temperature`, `pad_token_id`) at the time of instance creation as opposed to when calling the `generate` method.
56
+ This is done by passing a `GenerationConfig` from the `transformers` library at the time of initialization
57
+
58
+ ```python
59
+
60
+ from transformers import GenerationConfig
61
+
62
+ generation_config = GenerationConfig(min_length=-1, top_k=0.0, top_p=1.0, do_sample=True, pad_token_id=tokenizer.eos_token_id)
63
+
64
+ best_of_n = BestOfNSampler(model, tokenizer, queries_to_scores, length_sampler=output_length_sampler, generation_config=generation_config)
65
+
66
+ best_of_n.generate(query_tensors, device=device)
67
+
68
+ ```
69
+
70
+ Furthermore, at the time of initialization you can set the seed to control the repeatability of the generation process and the number of samples to generate for each query, as in the sketch below.
71
+
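+ A minimal sketch combining both options (the `seed` and `sample_size` keyword names are assumed to match the sampler's constructor arguments):
+
+ ```python
+ # fix the seed for repeatable sampling and draw 8 candidates per query
+ best_of_n = BestOfNSampler(
+     model,
+     tokenizer,
+     queries_to_scores,
+     length_sampler=output_length_sampler,
+     sample_size=8,
+     seed=42,
+ )
+ ```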
72
+
trl_md_files/callbacks.mdx ADDED
@@ -0,0 +1,13 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Callbacks
2
+
3
+ ## SyncRefModelCallback
4
+
5
+ [[autodoc]] SyncRefModelCallback
6
+
7
+ ## RichProgressCallback
8
+
9
+ [[autodoc]] RichProgressCallback
10
+
11
+ ## WinRateCallback
12
+
13
+ [[autodoc]] WinRateCallback
trl_md_files/clis.mdx ADDED
@@ -0,0 +1,119 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Command Line Interfaces (CLIs)
2
+
3
+ You can use TRL to fine-tune your language model with Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO), or even chat with your model, using the TRL CLIs.
4
+
5
+ Currently supported CLIs are:
6
+
7
+ - `trl sft`: fine-tune an LLM on a text/instruction dataset
8
+ - `trl dpo`: fine-tune an LLM with DPO on a preference dataset
9
+ - `trl chat`: quickly spin up an LLM fine-tuned for chatting
10
+
11
+ ## Fine-tuning with the CLI
12
+
13
+ Before getting started, pick a language model from the Hugging Face Hub. Supported models can be found with the "text-generation" filter on the models page. Also make sure to pick a relevant dataset for your task.
14
+
15
+ Before using the `sft` or `dpo` commands make sure to run:
16
+ ```bash
17
+ accelerate config
18
+ ```
19
+ and choose the right configuration for your training setup (single / multi-GPU, DeepSpeed, etc.). Make sure to complete all steps of `accelerate config` before running any CLI command.
20
+
21
+ We also recommend passing a YAML config file to configure your training protocol. Below is a simple example of a YAML file that you can use for training your models with the `trl sft` command.
22
+
23
+ ```yaml
24
+ model_name_or_path:
25
+ trl-internal-testing/tiny-random-LlamaForCausalLM
26
+ dataset_name:
27
+ imdb
28
+ dataset_text_field:
29
+ text
30
+ report_to:
31
+ none
32
+ learning_rate:
33
+ 0.0001
34
+ lr_scheduler_type:
35
+ cosine
36
+ ```
37
+
38
+ Save that config in a `.yaml` and get started immediately! An example CLI config is available as `examples/cli_configs/example_config.yaml`. Note you can overwrite the arguments from the config file by explicitly passing them to the CLI, e.g. from the root folder:
39
+
40
+ ```bash
41
+ trl sft --config examples/cli_configs/example_config.yaml --output_dir test-trl-cli --lr_scheduler_type cosine_with_restarts
42
+ ```
43
+
44
+ This will force `lr_scheduler_type` to `cosine_with_restarts`.
45
+
46
+ ### Supported Arguments
47
+
48
+ We support all arguments from `transformers.TrainingArguments`; for loading your model, we support all arguments from `~trl.ModelConfig`:
49
+
50
+ [[autodoc]] ModelConfig
51
+
52
+ You can pass any of these arguments either to the CLI or the YAML file.
53
+
54
+ ### Supervised Fine-tuning (SFT)
55
+
56
+ Follow the basic instructions above and run `trl sft --output_dir <output_dir> <*args>`:
57
+
58
+ ```bash
59
+ trl sft --model_name_or_path facebook/opt-125m --dataset_name imdb --output_dir opt-sft-imdb
60
+ ```
61
+
62
+ The SFT CLI is based on the `examples/scripts/sft.py` script.
63
+
64
+ ### Direct Preference Optimization (DPO)
65
+
66
+ To use the DPO CLI, you need to have a dataset in the TRL format such as
67
+
68
+ * TRL's Anthropic HH dataset: https://huggingface.co/datasets/trl-internal-testing/hh-rlhf-helpful-base-trl-style
69
+ * TRL's OpenAI TL;DR summarization dataset: https://huggingface.co/datasets/trl-internal-testing/tldr-preference-trl-style
70
+
71
+ These datasets always have at least three columns, `prompt`, `chosen` and `rejected`:
72
+
73
+ * `prompt` is a list of strings.
74
+ * `chosen` is the chosen response in [chat format](https://huggingface.co/docs/transformers/main/en/chat_templating)
75
+ * `rejected` is the rejected response in [chat format](https://huggingface.co/docs/transformers/main/en/chat_templating)
76
+
77
+
78
+ To do a quick start, you can run the following command:
79
+
80
+ ```bash
81
+ trl dpo --model_name_or_path facebook/opt-125m --output_dir trl-hh-rlhf --dataset_name trl-internal-testing/hh-rlhf-helpful-base-trl-style
82
+ ```
83
+
84
+
85
+ The DPO CLI is based on the `examples/scripts/dpo.py` script.
86
+
87
+
88
+ #### Custom preference dataset
89
+
90
+ Format the dataset into TRL format (you can adapt the `examples/datasets/anthropic_hh.py`):
91
+
92
+ ```bash
93
+ python examples/datasets/anthropic_hh.py --push_to_hub --hf_entity your-hf-org
94
+ ```
95
+
96
+ ## Chat interface
97
+
98
+ The chat CLI lets you quickly load the model and talk to it. Simply run the following:
99
+
100
+ ```bash
101
+ trl chat --model_name_or_path Qwen/Qwen1.5-0.5B-Chat
102
+ ```
103
+
104
+ > [!TIP]
105
+ > To use the chat CLI with the developer installation, you must run `make dev`
106
+ >
107
+
108
+ Note that the chat interface relies on the tokenizer's [chat template](https://huggingface.co/docs/transformers/chat_templating) to format the inputs for the model. Make sure your tokenizer has a chat template defined.
109
+
110
+ Besides talking to the model there are a few commands you can use:
111
+
112
+ - **clear**: clears the current conversation and starts a new one
113
+ - **example {NAME}**: loads the example named `{NAME}` from the config and uses it as the user input
114
+ - **set {SETTING_NAME}={SETTING_VALUE};**: changes the system prompt or generation settings (multiple settings are separated by a ';').
115
+ - **reset**: same as clear but also resets the generation configs to defaults if they have been changed by **set**
116
+ - **save {SAVE_NAME} (optional)**: saves the current chat and settings to a file, by default `./chat_history/{MODEL_NAME}/chat_{DATETIME}.yaml`, or to `{SAVE_NAME}` if provided
117
+ - **exit**: closes the interface
118
+
119
+ The default examples are defined in `examples/scripts/config/default_chat_config.yaml` but you can pass your own with `--config CONFIG_FILE` where you can also specify the default generation parameters.
trl_md_files/cpo_trainer.mdx ADDED
@@ -0,0 +1,113 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # CPO Trainer
2
+
3
+ Contrastive Preference Optimization (CPO) was introduced in the paper [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417) by Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. At a high level, CPO trains models to
4
+ avoid generating adequate but imperfect translations in Machine Translation (MT) tasks. However, CPO is a general approximation to the DPO loss and can be applied to other domains like chat.
5
+
6
+ CPO aims to mitigate two fundamental shortcomings of SFT. First, SFT’s methodology of minimizing the discrepancy between predicted outputs and gold-standard references inherently caps model performance at the quality level of the training data. Secondly, SFT lacks a mechanism to prevent the model from rejecting mistakes in translations. The CPO objective is derived from the DPO objective.
7
+
8
+ ## SimPO
9
+ The [SimPO](https://huggingface.co/papers/2405.14734) method is also implemented in the `CPOTrainer`. SimPO is an alternative loss that adds a reward margin, allows for length normalization, and does not use BC regularization. To use this loss, simply set `loss_type="simpo"` and `cpo_alpha=0` in the `CPOConfig`.
10
+
11
+ ## CPO-SimPO
12
+ We also offer the combined use of CPO and SimPO, which enables more stable training and improved performance. Learn more details at the [CPO-SimPO GitHub](https://github.com/fe1ixxu/CPO_SIMPO). To use this method, simply enable SimPO by setting `loss_type="simpo"` and a non-zero `cpo_alpha` in the `CPOConfig`, as in the sketch below.
13
+
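+ A minimal configuration sketch for both variants (the values are illustrative, not tuned recommendations):
+
+ ```python
+ from trl import CPOConfig
+
+ # SimPO: reward-margin loss without BC regularization
+ simpo_args = CPOConfig(loss_type="simpo", cpo_alpha=0.0)
+
+ # CPO-SimPO: SimPO loss combined with a non-zero behavior-cloning weight
+ cpo_simpo_args = CPOConfig(loss_type="simpo", cpo_alpha=0.5)
+ ```
+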
14
+ ## Expected dataset format
15
+
16
+ The CPO trainer expects a format identical to the DPO trainer, which should include three entries. These entries should be named as follows:
17
+
18
+ - `prompt`
19
+ - `chosen`
20
+ - `rejected`
21
+
22
+ for example:
23
+
24
+ ```py
25
+ cpo_dataset_dict = {
26
+ "prompt": [
27
+ "hello",
28
+ "how are you",
29
+ "What is your name?",
30
+ "What is your name?",
31
+ "Which is the best programming language?",
32
+ "Which is the best programming language?",
33
+ "Which is the best programming language?",
34
+ ],
35
+ "chosen": [
36
+ "hi nice to meet you",
37
+ "I am fine",
38
+ "My name is Mary",
39
+ "My name is Mary",
40
+ "Python",
41
+ "Python",
42
+ "Java",
43
+ ],
44
+ "rejected": [
45
+ "leave me alone",
46
+ "I am not fine",
47
+ "Whats it to you?",
48
+ "I dont have a name",
49
+ "Javascript",
50
+ "C++",
51
+ "C++",
52
+ ],
53
+ }
54
+ ```
55
+ where the `prompt` contains the context inputs, `chosen` contains the corresponding chosen responses and `rejected` contains the corresponding negative (rejected) responses. As can be seen, a prompt can have multiple responses, and this is reflected in the entries being repeated in the dictionary's value arrays.
56
+
57
+ ## Expected model format
58
+ The CPO trainer expects a model of `AutoModelForCausalLM`, compared to PPO that expects `AutoModelForCausalLMWithValueHead` for the value function.
59
+
60
+ ## Using the `CPOTrainer`
61
+ For a detailed example have a look at the `examples/scripts/cpo.py` script. At a high level we need to initialize the `CPOTrainer` with a `model` we wish to train. **Note that CPOTrainer eliminates the need to use the reference model, simplifying the optimization process.** The `beta` refers to the hyperparameter of the implicit reward, and the dataset contains the 3 entries listed above.
62
+
63
+ ```py
64
+ from trl import CPOConfig, CPOTrainer
+
+ cpo_config = CPOConfig(
65
+ beta=0.1,
66
+ )
67
+
68
+ cpo_trainer = CPOTrainer(
69
+ model,
70
+ args=cpo_config,
71
+ train_dataset=train_dataset,
72
+ tokenizer=tokenizer,
73
+ )
74
+ ```
75
+ After this one can then call:
76
+
77
+ ```py
78
+ cpo_trainer.train()
79
+ ```
80
+
81
+ ## Loss functions
82
+
83
+ Given the preference data, the `CPOTrainer` uses the sigmoid loss on the normalized likelihood via the `logsigmoid` to fit a logistic regression.
84
+
85
+ The [RSO](https://huggingface.co/papers/2309.06657) authors propose to use a hinge loss on the normalized likelihood from the [SLiC](https://huggingface.co/papers/2305.10425) paper. The `CPOTrainer` can be switched to this loss via the `loss_type="hinge"` argument and the `beta` in this case is the reciprocal of the margin.
86
+
87
+ The [IPO](https://huggingface.co/papers/2310.12036) authors provide a deeper theoretical understanding of the CPO algorithms and identify an issue with overfitting and propose an alternative loss which can be used via the `loss_type="ipo"` argument to the trainer. Note that the `beta` parameter is the reciprocal of the gap between the log-likelihood ratios of the chosen vs the rejected completion pair and thus the smaller the `beta` the larger this gap is. As per the paper the loss is averaged over log-likelihoods of the completion (unlike CPO which is summed only).
88
+
89
+ ### For Mixture of Experts Models: Enabling the auxiliary loss
90
+
91
+ MOEs are the most efficient if the load is about equally distributed between experts.
92
+ To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss.
93
+
94
+ This option is enabled by setting `output_router_logits=True` in the model config (e.g. MixtralConfig).
95
+ To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter `router_aux_loss_coef=...` (default: 0.001).
96
+
97
+ ## Logging
98
+
99
+ While training and evaluating we record the following reward metrics:
100
+
101
+ * `rewards/chosen`: the mean log probabilities of the policy model for the chosen responses scaled by beta
102
+ * `rewards/rejected`: the mean log probabilities of the policy model for the rejected responses scaled by beta
103
+ * `rewards/accuracies`: mean of how often the chosen rewards are > than the corresponding rejected rewards
104
+ * `rewards/margins`: the mean difference between the chosen and corresponding rejected rewards
105
+ * `nll_loss`: the mean negative log likelihood loss of the policy model for the chosen responses
106
+
107
+ ## CPOTrainer
108
+
109
+ [[autodoc]] CPOTrainer
110
+
111
+ ## CPOConfig
112
+
113
+ [[autodoc]] CPOConfig