Dataset Viewer
id (string, 14–16 chars) | text (string, 45–2.73k chars) | source (string, 49–114 chars)
---|---|---
f9aab5578669-0 | LangChain Gallery#
Lots of people have built some pretty awesome stuff with LangChain.
This is a collection of our favorites.
If you see any other demos that you think we should highlight, be sure to let us know!
Open Source#
HowDoI.ai
This is an experiment in building a large-language-model-backed chatbot. It can hold a conversation, remember previous comments/questions,
and answer all types of queries (history, web search, movie data, weather, news, and more).
YouTube Transcription QA with Sources
An end-to-end example of doing question answering on YouTube transcripts, returning the timestamps as sources to legitimize the answer.
QA Slack Bot
This application is a Slack Bot that uses LangChain and OpenAI's GPT-3 language model to provide domain-specific answers. You provide the documents.
ThoughtSource
A central, open resource and community around data and tools related to chain-of-thought reasoning in large language models.
LLM Strategy
This Python package adds a decorator llm_strategy that connects to an LLM (such as OpenAI’s GPT-3) and uses the LLM to “implement” abstract methods in interface classes. It does this by forwarding requests to the LLM and converting the responses back to Python data using Python’s @dataclasses.
Zero-Shot Corporate Lobbyist
A notebook showing how to use GPT to help with the work of a corporate lobbyist.
Dagster Documentation ChatBot
Build a GitHub support bot with GPT3, LangChain, and Python.
Google Folder Semantic Search
A Jupyter notebook demonstrating how you could create a semantic search engine on documents in one of your Google Folders.
Talk With Wind
Record sounds of anything (birds, wind, fire, train station) and chat with it. | https://python.langchain.com/en/latest/gallery.html |
f9aab5578669-1 | ChatGPT LangChain
This simple application demonstrates a conversational agent implemented with OpenAI GPT-3.5 and LangChain. When necessary, it leverages tools for complex math, searching the internet, and accessing news and weather.
GPT Math Techniques
A Hugging Face spaces project showing off the benefits of using PAL for math problems.
GPT Political Compass
Measure the political compass of GPT.
Notion Database Question-Answering Bot
Open source GitHub project shows how to use LangChain to create a chatbot that can answer questions about an arbitrary Notion database.
LlamaIndex
LlamaIndex (formerly GPT Index) is a project consisting of a set of data structures that are created using GPT-3 and can be traversed using GPT-3 in order to answer queries.
Grover’s Algorithm
Leveraging Qiskit, OpenAI and LangChain to demonstrate Grover’s algorithm
QNimGPT
A chat UI to play Nim, where a player can select an opponent, either a quantum computer or an AI
ReAct TextWorld
Leveraging the ReActTextWorldAgent to play TextWorld with an LLM!
Fact Checker
This repo is a simple demonstration of using LangChain to do fact-checking with prompt chaining.
DocsGPT
Answer questions about the documentation of any project
Misc. Colab Notebooks#
Wolfram Alpha in Conversational Agent
Give ChatGPT a WolframAlpha neural implant
Tool Updates in Agents
Agent improvements (6th Jan 2023)
Conversational Agent with Tools (Langchain AGI)
Langchain AGI (23rd Dec 2022)
Proprietary#
Daimon
A chat-based AI personal assistant with long-term memory about you. | https://python.langchain.com/en/latest/gallery.html |
f9aab5578669-2 | Daimon
A chat-based AI personal assistant with long-term memory about you.
Summarize any file with AI
Summarize not only long docs, interview audio or video files quickly, but also entire websites and YouTube videos. Share or download your generated summaries to collaborate with others, or revisit them at any time! Bonus: @anysummary on Twitter will also summarize any thread it is tagged in.
AI Assisted SQL Query Generator
An app to write SQL using natural language and execute it against a real database.
Clerkie
Stack Tracing QA Bot to help debug complex stack tracing (especially the ones that go multi-function/file deep).
Sales Email Writer
By Raza Habib, this demo utilizes LangChain + SerpAPI + HumanLoop to write sales emails. Given a company name and a person, it uses Google Search (via SerpAPI) to gather more information on the company and the person, and then writes them a sales message.
Question-Answering on a Web Browser
By Zahid Khawaja, this demo utilizes question answering to answer questions about a given website. A followup added this for YouTube videos, and then another followup added it for Wikipedia.
Mynd
A journaling app for self-care that uses AI to uncover insights and patterns over time.
| https://python.langchain.com/en/latest/gallery.html |
9c3a329fce81-0 | Glossary#
This is a collection of terminology commonly used when developing LLM applications.
It contains references to the external papers or sources where each concept was first introduced,
as well as to the places in LangChain where the concept is used.
Chain of Thought Prompting#
A prompting technique used to encourage the model to generate a series of intermediate reasoning steps.
A less formal way to induce this behavior is to include “Let’s think step-by-step” in the prompt.
Resources:
Chain-of-Thought Paper
Step-by-Step Paper
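A minimal sketch (not from the original page) of inducing this behavior with the PromptTemplate and OpenAI wrappers used elsewhere in these docs; the template text and question are illustrative assumptions:

from langchain import LLMChain, OpenAI, PromptTemplate

# Append "Let's think step-by-step" to elicit intermediate reasoning steps.
cot_prompt = PromptTemplate(
    template="{question}\nLet's think step-by-step.",
    input_variables=["question"],
)
cot_chain = LLMChain(llm=OpenAI(temperature=0), prompt=cot_prompt)
print(cot_chain.run("A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?"))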
Action Plan Generation#
A prompting technique that uses a language model to generate actions to take.
The results of these actions can then be fed back into the language model to generate a subsequent action.
Resources:
WebGPT Paper
SayCan Paper
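A rough sketch of the generate-act-observe loop described above, using the same LLMChain and PromptTemplate primitives; the prompt wording, the placeholder executor, and the stop condition are assumptions for illustration, not taken from the WebGPT or SayCan papers:

from langchain import LLMChain, OpenAI, PromptTemplate

plan_prompt = PromptTemplate(
    template=(
        "Objective: {objective}\n"
        "Previous actions and results:\n{history}\n"
        "Propose the single next action, or reply DONE if the objective is met."
    ),
    input_variables=["objective", "history"],
)
planner = LLMChain(llm=OpenAI(temperature=0), prompt=plan_prompt)

def execute(action: str) -> str:
    # Placeholder executor -- a real system would call a tool or API here.
    return f"(result of: {action})"

history = ""
for _ in range(5):  # cap the number of planning steps
    action = planner.run(objective="Find the population of Paris", history=history or "none")
    if "DONE" in action:
        break
    # Feed the result of each action back into the next planning call.
    history += f"- {action.strip()} -> {execute(action)}\n"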
ReAct Prompting#
A prompting technique that combines Chain-of-Thought prompting with action plan generation.
This induces the model to think about which action to take, and then take it.
Resources:
Paper
LangChain Example
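A small sketch of a ReAct-style agent using initialize_agent and load_tools (both listed in the API index below); the choice of tools and the question are illustrative, and a SERPAPI_API_KEY is assumed to be set:

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)  # web search + calculator tools

# ZERO_SHOT_REACT_DESCRIPTION interleaves Thought / Action / Observation steps.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Who is the current CEO of OpenAI, and what is 2 raised to the 10th power?")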
Self-ask#
A prompting method that builds on top of chain-of-thought prompting.
In this method, the model explicitly asks itself follow-up questions, which are then answered by an external search engine.
Resources:
Paper
LangChain Example
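The SelfAskWithSearchChain used in the Model Comparison notebook below is LangChain's implementation of this method; a minimal sketch (a SERPAPI_API_KEY is assumed):

from langchain import OpenAI, SelfAskWithSearchChain, SerpAPIWrapper

# The model asks itself follow-up questions; the search wrapper answers them.
self_ask = SelfAskWithSearchChain(
    llm=OpenAI(temperature=0),
    search_chain=SerpAPIWrapper(),
    verbose=True,
)
self_ask.run("What is the hometown of the reigning men's U.S. Open champion?")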
Prompt Chaining#
Combining multiple LLM calls, with the output of one step becoming the input to the next.
Resources:
PromptChainer Paper
Language Model Cascades
ICE Primer Book
Socratic Models
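A minimal two-step sketch using SimpleSequentialChain (listed in the API index below); the prompts are illustrative assumptions:

from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.chains import SimpleSequentialChain

llm = OpenAI(temperature=0)
title_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(template="Write a title for an article about {topic}.", input_variables=["topic"]),
)
outline_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(template="Write a short outline for an article titled: {title}", input_variables=["title"]),
)
# The title produced by the first chain becomes the input of the second.
chain = SimpleSequentialChain(chains=[title_chain, outline_chain], verbose=True)
chain.run("prompt chaining")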
Memetic Proxy# | https://python.langchain.com/en/latest/glossary.html |
9c3a329fce81-1 | Encouraging the LLM to respond in a certain way by framing the discussion in a context that the model knows of and that will result in that type of response; for example, as a conversation between a student and a teacher.
Resources:
Paper
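A small sketch of framing the discussion as a teacher answering a student; the framing text is an illustrative assumption:

from langchain import LLMChain, OpenAI, PromptTemplate

proxy_prompt = PromptTemplate(
    template=(
        "The following is a conversation between a patient teacher and a curious student.\n"
        "Student: {question}\n"
        "Teacher:"
    ),
    input_variables=["question"],
)
print(LLMChain(llm=OpenAI(temperature=0), prompt=proxy_prompt).run("Why is the sky blue?"))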
Self Consistency#
A decoding strategy that samples a diverse set of reasoning paths and then selects the most consistent answer.
It is most effective when combined with chain-of-thought prompting.
Resources:
Paper
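A rough sketch of the idea: sample several chain-of-thought completions at a non-zero temperature and keep the most common final answer. The sampling parameters and the simple majority vote are assumptions made for illustration; the paper describes marginalizing over reasoning paths:

from collections import Counter
from langchain import LLMChain, OpenAI, PromptTemplate

prompt = PromptTemplate(
    template="{question}\nLet's think step-by-step, then give the final answer on the last line.",
    input_variables=["question"],
)
# temperature > 0 so the sampled reasoning paths differ from each other
chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=prompt)

question = "If there are 3 cars and each car has 4 wheels, how many wheels are there?"
answers = [chain.run(question).strip().splitlines()[-1] for _ in range(5)]
print(Counter(answers).most_common(1)[0][0])  # the most consistent final answer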
Inception#
Also called “First Person Instruction”.
Encouraging the model to think a certain way by including the start of the model’s response in the prompt.
Resources:
Example
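A minimal sketch: the prompt ends with the beginning of the desired response, so the model continues in that voice. The seeded text is an illustrative assumption:

from langchain import OpenAI

llm = OpenAI(temperature=0)
prompt = (
    "Explain how a hash map works.\n\n"
    # Seed the start of the model's own answer to steer its structure.
    "Sure, let's break it down into three parts. 1."
)
print("1." + llm(prompt))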
MemPrompt#
MemPrompt maintains a memory of errors and user feedback, and uses them to prevent repetition of mistakes.
Resources:
Paper
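A very rough sketch of the idea, not the paper's implementation: store past mistakes and user corrections, and prepend any stored feedback when the same question comes up again. All names here are hypothetical:

from langchain import OpenAI

llm = OpenAI(temperature=0)
memory = {}  # maps a question to the user's earlier correction

def ask(question: str) -> str:
    feedback = memory.get(question)
    prefix = f"Earlier feedback on this question: {feedback}\n" if feedback else ""
    return llm(prefix + question)

answer = ask("What does 'erstwhile' mean?")
# If the user corrects the answer, remember it so the mistake is not repeated.
memory["What does 'erstwhile' mean?"] = "It means 'former', not 'serious'."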
| https://python.langchain.com/en/latest/glossary.html |
fdd8bca3df51-0 | YouTube#
This is a collection of LangChain tutorials and videos on YouTube.
Introduction to LangChain with Harrison Chase, creator of LangChain#
Building the Future with LLMs, LangChain, & Pinecone by Pinecone
LangChain and Weaviate with Harrison Chase and Bob van Luijt - Weaviate Podcast #36 by Weaviate • Vector Database
LangChain Demo + Q&A with Harrison Chase by Full Stack Deep Learning
Tutorials#
LangChain Crash Course - Build apps with language models by Patrick Loeber
LangChain for Gen AI and LLMs by James Briggs:
#1 Getting Started with GPT-3 vs. Open Source LLMs
#2 Prompt Templates for GPT 3.5 and other LLMs
#3 LLM Chains using GPT 3.5 and other LLMs
#4 Chatbot Memory for Chat-GPT, Davinci + other LLMs
#5 Chat with OpenAI in LangChain
#6 LangChain Agents Deep Dive with GPT 3.5
Prompt Engineering with OpenAI’s GPT-3 and other LLMs
LangChain 101 by Data Independent:
What Is LangChain? - LangChain + ChatGPT Overview
Quickstart Guide
Beginner Guide To 7 Essential Concepts
OpenAI + Wolfram Alpha
Ask Questions On Your Custom (or Private) Files
Connect Google Drive Files To OpenAI
YouTube Transcripts + OpenAI
Question A 300 Page Book (w/ OpenAI + Pinecone)
Workaround OpenAI's Token Limit With Chain Types
Build Your Own OpenAI + LangChain Web App in 23 Minutes
Working With The New ChatGPT API | https://python.langchain.com/en/latest/youtube.html |
fdd8bca3df51-1 | OpenAI + LangChain Wrote Me 100 Custom Sales Emails
Structured Output From OpenAI (Clean Dirty Data)
Connect OpenAI To +5,000 Tools (LangChain + Zapier)
Use LLMs To Extract Data From Text (Expert Mode)
LangChain How to and guides by Sam Witteveen:
LangChain Basics - LLMs & PromptTemplates with Colab
LangChain Basics - Tools and Chains
ChatGPT API Announcement & Code Walkthrough with LangChain
Conversations with Memory (explanation & code walkthrough)
Chat with Flan20B
Using Hugging Face Models locally (code walkthrough)
PAL : Program-aided Language Models with LangChain code
Building a Summarization System with LangChain and GPT-3 - Part 1
Building a Summarization System with LangChain and GPT-3 - Part 2
Microsoft’s Visual ChatGPT using LangChain
LangChain Agents - Joining Tools and Chains with Decisions
Comparing LLMs with LangChain
Using Constitutional AI in LangChain
Talking to Alpaca with LangChain - Creating an Alpaca Chatbot
Talk to your CSV & Excel with LangChain
BabyAGI: Discover the Power of Task-Driven Autonomous Agents!
Improve your BabyAGI with LangChain
LangChain by Prompt Engineering:
LangChain Crash Course - All You Need to Know to Build Powerful Apps with LLMs
Working with MULTIPLE PDF Files in LangChain: ChatGPT for your Data
ChatGPT for YOUR OWN PDF files with LangChain
Talk to YOUR DATA without OpenAI APIs: LangChain
LangChain by Chat with data
LangChain Beginner’s Tutorial for Typescript/Javascript
GPT-4 Tutorial: How to Chat With Multiple PDF Files (~1000 pages of Tesla’s 10-K Annual Reports) | https://python.langchain.com/en/latest/youtube.html |
fdd8bca3df51-2 | GPT-4 & LangChain Tutorial: How to Chat With A 56-Page PDF Document (w/Pinecone)
Videos (sorted by views)#
Building AI LLM Apps with LangChain (and more?) - LIVE STREAM by Nicholas Renotte
First look - ChatGPT + WolframAlpha (GPT-3.5 and Wolfram|Alpha via LangChain by James Weaver) by Dr Alan D. Thompson
LangChain explained - The hottest new Python framework by AssemblyAI
Chatbot with INFINITE MEMORY using OpenAI & Pinecone - GPT-3, Embeddings, ADA, Vector DB, Semantic by David Shapiro ~ AI
LangChain for LLMs is… basically just an Ansible playbook by David Shapiro ~ AI
Build your own LLM Apps with LangChain & GPT-Index by 1littlecoder
BabyAGI - New System of Autonomous AI Agents with LangChain by 1littlecoder
Run BabyAGI with Langchain Agents (with Python Code) by 1littlecoder
How to Use Langchain With Zapier | Write and Send Email with GPT-3 | OpenAI API Tutorial by StarMorph AI
Use Your Locally Stored Files To Get Response From GPT - OpenAI | Langchain | Python by Shweta Lodha
Langchain JS | How to Use GPT-3, GPT-4 to Reference your own Data | OpenAI Embeddings Intro by StarMorph AI
The easiest way to work with large language models | Learn LangChain in 10min by Sophia Yang
4 Autonomous AI Agents: “Westworld” simulation BabyAGI, AutoGPT, Camel, LangChain by Sophia Yang
AI CAN SEARCH THE INTERNET? Langchain Agents + OpenAI ChatGPT by tylerwhatsgood
Weaviate + LangChain for LLM apps presented by Erika Cardenas by Weaviate • Vector Database | https://python.langchain.com/en/latest/youtube.html |
fdd8bca3df51-3 | Analyze Custom CSV Data with GPT-4 using Langchain by Venelin Valkov
Langchain Overview - How to Use Langchain & ChatGPT by Python In Office
Custom langchain Agent & Tools with memory. Turn any Python function into langchain tool with Gpt 3 by echohive
ChatGPT with any YouTube video using langchain and chromadb by echohive
How to Talk to a PDF using LangChain and ChatGPT by Automata Learning Lab
Langchain Document Loaders Part 1: Unstructured Files by Merk
LangChain - Prompt Templates (what all the best prompt engineers use) by Nick Daigler
LangChain. Crear aplicaciones Python impulsadas por GPT (Spanish: "Create GPT-powered Python applications") by Jesús Conde
Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial by Rachel Woods
BabyAGI + GPT-4 Langchain Agent with Internet Access by tylerwhatsgood
Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI by Arnoldas Kemeklis
Get Started with LangChain in Node.js by Developers Digest
LangChain + OpenAI tutorial: Building a Q&A system w/ own text data by Samuel Chan
Langchain + Zapier Agent by Merk
Connecting the Internet with ChatGPT (LLMs) using Langchain And Answers Your Questions by Kamalraj M M
Build More Powerful LLM Applications for Business’s with LangChain (Beginners Guide) by No Code Blackbox
| https://python.langchain.com/en/latest/youtube.html |
e88177a46e16-0 | Model Comparison#
Constructing your language model application will likely involve choosing between many different options for prompts, models, and even chains. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way.
LangChain provides the concept of a ModelLaboratory to test out and try different models.
from langchain import LLMChain, OpenAI, Cohere, HuggingFaceHub, PromptTemplate
from langchain.model_laboratory import ModelLaboratory
llms = [
OpenAI(temperature=0),
Cohere(model="command-xlarge-20221108", max_tokens=20, temperature=0),
HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature":1})
]
model_lab = ModelLaboratory.from_llms(llms)
model_lab.compare("What color is a flamingo?")
Input:
What color is a flamingo?
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
Flamingos are pink.
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
Pink
HuggingFaceHub
Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1}
pink | https://python.langchain.com/en/latest/model_laboratory.html |
e88177a46e16-1 | prompt = PromptTemplate(template="What is the capital of {state}?", input_variables=["state"])
model_lab_with_prompt = ModelLaboratory.from_llms(llms, prompt=prompt)
model_lab_with_prompt.compare("New York")
Input:
New York
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
The capital of New York is Albany.
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
The capital of New York is Albany.
HuggingFaceHub
Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1}
st john s
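ModelLaboratory can also compare whole chains rather than single models; the cells below build self-ask-with-search chains on top of two different LLMs and run them on the same question.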
from langchain import SelfAskWithSearchChain, SerpAPIWrapper
open_ai_llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
self_ask_with_search_openai = SelfAskWithSearchChain(llm=open_ai_llm, search_chain=search, verbose=True)
cohere_llm = Cohere(temperature=0, model="command-xlarge-20221108")
search = SerpAPIWrapper()
self_ask_with_search_cohere = SelfAskWithSearchChain(llm=cohere_llm, search_chain=search, verbose=True)
chains = [self_ask_with_search_openai, self_ask_with_search_cohere]
names = [str(open_ai_llm), str(cohere_llm)] | https://python.langchain.com/en/latest/model_laboratory.html |
e88177a46e16-2 | model_lab = ModelLaboratory(chains, names=names)
model_lab.compare("What is the hometown of the reigning men's U.S. Open champion?")
Input:
What is the hometown of the reigning men's U.S. Open champion?
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
> Entering new chain...
What is the hometown of the reigning men's U.S. Open champion?
Are follow up questions needed here: Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain.
So the final answer is: El Palmar, Spain
> Finished chain.
So the final answer is: El Palmar, Spain
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 256, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
> Entering new chain...
What is the hometown of the reigning men's U.S. Open champion?
Are follow up questions needed here: Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz.
So the final answer is:
Carlos Alcaraz
> Finished chain.
So the final answer is:
Carlos Alcaraz
| https://python.langchain.com/en/latest/model_laboratory.html |
01759e4659c5-0 | API References#
All of LangChain’s reference documentation, in one place.
Full documentation on all methods, classes, and APIs in LangChain.
Prompts
LLMs
Utilities
Chains
Agents
| https://python.langchain.com/en/latest/reference.html |
b9563f5ad006-0 | Index
_
__call__() (langchain.llms.AI21 method)
(langchain.llms.AlephAlpha method)
(langchain.llms.Anthropic method)
(langchain.llms.AzureOpenAI method)
(langchain.llms.Banana method)
(langchain.llms.CerebriumAI method)
(langchain.llms.Cohere method)
(langchain.llms.DeepInfra method)
(langchain.llms.ForefrontAI method)
(langchain.llms.GooseAI method)
(langchain.llms.GPT4All method)
(langchain.llms.HuggingFaceEndpoint method)
(langchain.llms.HuggingFaceHub method)
(langchain.llms.HuggingFacePipeline method)
(langchain.llms.LlamaCpp method)
(langchain.llms.Modal method)
(langchain.llms.NLPCloud method)
(langchain.llms.OpenAI method)
(langchain.llms.OpenAIChat method)
(langchain.llms.Petals method)
(langchain.llms.PromptLayerOpenAI method)
(langchain.llms.PromptLayerOpenAIChat method)
(langchain.llms.Replicate method)
(langchain.llms.RWKV method)
(langchain.llms.SagemakerEndpoint method)
(langchain.llms.SelfHostedHuggingFaceLLM method)
(langchain.llms.SelfHostedPipeline method)
(langchain.llms.StochasticAI method)
(langchain.llms.Writer method)
A | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-1 | (langchain.llms.StochasticAI method)
(langchain.llms.Writer method)
A
aadd_documents() (langchain.vectorstores.VectorStore method)
aadd_texts() (langchain.vectorstores.VectorStore method)
aapply() (langchain.chains.LLMChain method)
aapply_and_parse() (langchain.chains.LLMChain method)
add() (langchain.docstore.InMemoryDocstore method)
add_documents() (langchain.vectorstores.VectorStore method)
add_embeddings() (langchain.vectorstores.FAISS method)
add_example() (langchain.prompts.example_selector.LengthBasedExampleSelector method)
(langchain.prompts.example_selector.SemanticSimilarityExampleSelector method)
add_texts() (langchain.vectorstores.Annoy method)
(langchain.vectorstores.AtlasDB method)
(langchain.vectorstores.Chroma method)
(langchain.vectorstores.DeepLake method)
(langchain.vectorstores.ElasticVectorSearch method)
(langchain.vectorstores.FAISS method)
(langchain.vectorstores.Milvus method)
(langchain.vectorstores.OpenSearchVectorSearch method)
(langchain.vectorstores.Pinecone method)
(langchain.vectorstores.Qdrant method)
(langchain.vectorstores.SupabaseVectorStore method)
(langchain.vectorstores.VectorStore method)
(langchain.vectorstores.Weaviate method)
add_vectors() (langchain.vectorstores.SupabaseVectorStore method)
afrom_documents() (langchain.vectorstores.VectorStore class method)
afrom_texts() (langchain.vectorstores.VectorStore class method)
agenerate() (langchain.chains.LLMChain method)
(langchain.llms.AI21 method)
(langchain.llms.AlephAlpha method)
(langchain.llms.Anthropic method)
(langchain.llms.AzureOpenAI method)
(langchain.llms.Banana method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-2 | (langchain.llms.AzureOpenAI method)
(langchain.llms.Banana method)
(langchain.llms.CerebriumAI method)
(langchain.llms.Cohere method)
(langchain.llms.DeepInfra method)
(langchain.llms.ForefrontAI method)
(langchain.llms.GooseAI method)
(langchain.llms.GPT4All method)
(langchain.llms.HuggingFaceEndpoint method)
(langchain.llms.HuggingFaceHub method)
(langchain.llms.HuggingFacePipeline method)
(langchain.llms.LlamaCpp method)
(langchain.llms.Modal method)
(langchain.llms.NLPCloud method)
(langchain.llms.OpenAI method)
(langchain.llms.OpenAIChat method)
(langchain.llms.Petals method)
(langchain.llms.PromptLayerOpenAI method)
(langchain.llms.PromptLayerOpenAIChat method)
(langchain.llms.Replicate method)
(langchain.llms.RWKV method)
(langchain.llms.SagemakerEndpoint method)
(langchain.llms.SelfHostedHuggingFaceLLM method)
(langchain.llms.SelfHostedPipeline method)
(langchain.llms.StochasticAI method)
(langchain.llms.Writer method)
agenerate_prompt() (langchain.llms.AI21 method)
(langchain.llms.AlephAlpha method)
(langchain.llms.Anthropic method)
(langchain.llms.AzureOpenAI method)
(langchain.llms.Banana method)
(langchain.llms.CerebriumAI method)
(langchain.llms.Cohere method)
(langchain.llms.DeepInfra method)
(langchain.llms.ForefrontAI method)
(langchain.llms.GooseAI method)
(langchain.llms.GPT4All method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-3 | (langchain.llms.GooseAI method)
(langchain.llms.GPT4All method)
(langchain.llms.HuggingFaceEndpoint method)
(langchain.llms.HuggingFaceHub method)
(langchain.llms.HuggingFacePipeline method)
(langchain.llms.LlamaCpp method)
(langchain.llms.Modal method)
(langchain.llms.NLPCloud method)
(langchain.llms.OpenAI method)
(langchain.llms.OpenAIChat method)
(langchain.llms.Petals method)
(langchain.llms.PromptLayerOpenAI method)
(langchain.llms.PromptLayerOpenAIChat method)
(langchain.llms.Replicate method)
(langchain.llms.RWKV method)
(langchain.llms.SagemakerEndpoint method)
(langchain.llms.SelfHostedHuggingFaceLLM method)
(langchain.llms.SelfHostedPipeline method)
(langchain.llms.StochasticAI method)
(langchain.llms.Writer method)
agent (langchain.agents.AgentExecutor attribute)
(langchain.agents.MRKLChain attribute)
(langchain.agents.ReActChain attribute)
(langchain.agents.SelfAskWithSearchChain attribute)
AgentType (class in langchain.agents)
ai_prefix (langchain.agents.ConversationalAgent attribute)
aiosession (langchain.serpapi.SerpAPIWrapper attribute)
(langchain.utilities.searx_search.SearxSearchWrapper attribute)
aleph_alpha_api_key (langchain.llms.AlephAlpha attribute)
allowed_special (langchain.llms.AzureOpenAI attribute)
(langchain.llms.OpenAIChat attribute)
(langchain.llms.PromptLayerOpenAIChat attribute)
allowed_tools (langchain.agents.Agent attribute)
amax_marginal_relevance_search() (langchain.vectorstores.VectorStore method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-4 | amax_marginal_relevance_search() (langchain.vectorstores.VectorStore method)
amax_marginal_relevance_search_by_vector() (langchain.vectorstores.VectorStore method)
Annoy (class in langchain.vectorstores)
answers (langchain.utilities.searx_search.SearxResults property)
api_answer_chain (langchain.chains.APIChain attribute)
api_docs (langchain.chains.APIChain attribute)
api_operation (langchain.chains.OpenAPIEndpointChain attribute)
api_request_chain (langchain.chains.APIChain attribute)
(langchain.chains.OpenAPIEndpointChain attribute)
api_response_chain (langchain.chains.OpenAPIEndpointChain attribute)
api_url (langchain.llms.StochasticAI attribute)
aplan() (langchain.agents.Agent method)
(langchain.agents.BaseMultiActionAgent method)
(langchain.agents.BaseSingleActionAgent method)
(langchain.agents.LLMSingleActionAgent method)
apply() (langchain.chains.LLMChain method)
apply_and_parse() (langchain.chains.LLMChain method)
apredict() (langchain.chains.LLMChain method)
apredict_and_parse() (langchain.chains.LLMChain method)
aprep_prompts() (langchain.chains.LLMChain method)
are_all_true_prompt (langchain.chains.LLMSummarizationCheckerChain attribute)
aresults() (langchain.utilities.searx_search.SearxSearchWrapper method)
args (langchain.agents.Tool property)
arun() (langchain.serpapi.SerpAPIWrapper method)
(langchain.utilities.searx_search.SearxSearchWrapper method)
as_retriever() (langchain.vectorstores.VectorStore method)
asimilarity_search() (langchain.vectorstores.VectorStore method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-5 | asimilarity_search() (langchain.vectorstores.VectorStore method)
asimilarity_search_by_vector() (langchain.vectorstores.VectorStore method)
AtlasDB (class in langchain.vectorstores)
atransform_documents() (langchain.text_splitter.TextSplitter method)
B
bad_words (langchain.llms.NLPCloud attribute)
base_embeddings (langchain.chains.HypotheticalDocumentEmbedder attribute)
base_url (langchain.llms.AI21 attribute)
(langchain.llms.ForefrontAI attribute)
(langchain.llms.Writer attribute)
batch_size (langchain.llms.AzureOpenAI attribute)
beam_search_diversity_rate (langchain.llms.Writer attribute)
beam_width (langchain.llms.Writer attribute)
best_of (langchain.llms.AlephAlpha attribute)
(langchain.llms.AzureOpenAI attribute)
C
cache_folder (langchain.embeddings.HuggingFaceEmbeddings attribute)
(langchain.embeddings.HuggingFaceInstructEmbeddings attribute)
callback_manager (langchain.agents.MRKLChain attribute)
(langchain.agents.ReActChain attribute)
(langchain.agents.SelfAskWithSearchChain attribute)
categories (langchain.utilities.searx_search.SearxSearchWrapper attribute)
chain (langchain.chains.ConstitutionalChain attribute)
chains (langchain.chains.SequentialChain attribute)
(langchain.chains.SimpleSequentialChain attribute)
CharacterTextSplitter (class in langchain.text_splitter)
CHAT_CONVERSATIONAL_REACT_DESCRIPTION (langchain.agents.AgentType attribute)
CHAT_ZERO_SHOT_REACT_DESCRIPTION (langchain.agents.AgentType attribute)
check_assertions_prompt (langchain.chains.LLMCheckerChain attribute)
(langchain.chains.LLMSummarizationCheckerChain attribute)
Chroma (class in langchain.vectorstores) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-6 | Chroma (class in langchain.vectorstores)
CHUNK_LEN (langchain.llms.RWKV attribute)
chunk_size (langchain.embeddings.OpenAIEmbeddings attribute)
client (langchain.llms.Petals attribute)
combine_docs_chain (langchain.chains.AnalyzeDocumentChain attribute)
combine_documents_chain (langchain.chains.MapReduceChain attribute)
combine_embeddings() (langchain.chains.HypotheticalDocumentEmbedder method)
completion_bias_exclusion_first_token_only (langchain.llms.AlephAlpha attribute)
compress_to_size (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)
constitutional_principles (langchain.chains.ConstitutionalChain attribute)
construct() (langchain.llms.AI21 class method)
(langchain.llms.AlephAlpha class method)
(langchain.llms.Anthropic class method)
(langchain.llms.AzureOpenAI class method)
(langchain.llms.Banana class method)
(langchain.llms.CerebriumAI class method)
(langchain.llms.Cohere class method)
(langchain.llms.DeepInfra class method)
(langchain.llms.ForefrontAI class method)
(langchain.llms.GooseAI class method)
(langchain.llms.GPT4All class method)
(langchain.llms.HuggingFaceEndpoint class method)
(langchain.llms.HuggingFaceHub class method)
(langchain.llms.HuggingFacePipeline class method)
(langchain.llms.LlamaCpp class method)
(langchain.llms.Modal class method)
(langchain.llms.NLPCloud class method)
(langchain.llms.OpenAI class method)
(langchain.llms.OpenAIChat class method)
(langchain.llms.Petals class method)
(langchain.llms.PromptLayerOpenAI class method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-7 | (langchain.llms.PromptLayerOpenAI class method)
(langchain.llms.PromptLayerOpenAIChat class method)
(langchain.llms.Replicate class method)
(langchain.llms.RWKV class method)
(langchain.llms.SagemakerEndpoint class method)
(langchain.llms.SelfHostedHuggingFaceLLM class method)
(langchain.llms.SelfHostedPipeline class method)
(langchain.llms.StochasticAI class method)
(langchain.llms.Writer class method)
content_handler (langchain.embeddings.SagemakerEndpointEmbeddings attribute)
(langchain.llms.SagemakerEndpoint attribute)
CONTENT_KEY (langchain.vectorstores.Qdrant attribute)
contextual_control_threshold (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)
(langchain.llms.AlephAlpha attribute)
control_log_additive (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)
(langchain.llms.AlephAlpha attribute)
CONVERSATIONAL_REACT_DESCRIPTION (langchain.agents.AgentType attribute)
copy() (langchain.llms.AI21 method)
(langchain.llms.AlephAlpha method)
(langchain.llms.Anthropic method)
(langchain.llms.AzureOpenAI method)
(langchain.llms.Banana method)
(langchain.llms.CerebriumAI method)
(langchain.llms.Cohere method)
(langchain.llms.DeepInfra method)
(langchain.llms.ForefrontAI method)
(langchain.llms.GooseAI method)
(langchain.llms.GPT4All method)
(langchain.llms.HuggingFaceEndpoint method)
(langchain.llms.HuggingFaceHub method)
(langchain.llms.HuggingFacePipeline method)
(langchain.llms.LlamaCpp method)
(langchain.llms.Modal method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-8 | (langchain.llms.LlamaCpp method)
(langchain.llms.Modal method)
(langchain.llms.NLPCloud method)
(langchain.llms.OpenAI method)
(langchain.llms.OpenAIChat method)
(langchain.llms.Petals method)
(langchain.llms.PromptLayerOpenAI method)
(langchain.llms.PromptLayerOpenAIChat method)
(langchain.llms.Replicate method)
(langchain.llms.RWKV method)
(langchain.llms.SagemakerEndpoint method)
(langchain.llms.SelfHostedHuggingFaceLLM method)
(langchain.llms.SelfHostedPipeline method)
(langchain.llms.StochasticAI method)
(langchain.llms.Writer method)
coroutine (langchain.agents.Tool attribute)
countPenalty (langchain.llms.AI21 attribute)
create_assertions_prompt (langchain.chains.LLMSummarizationCheckerChain attribute)
create_csv_agent() (in module langchain.agents)
create_documents() (langchain.text_splitter.TextSplitter method)
create_draft_answer_prompt (langchain.chains.LLMCheckerChain attribute)
create_index() (langchain.vectorstores.AtlasDB method)
create_json_agent() (in module langchain.agents)
create_llm_result() (langchain.llms.AzureOpenAI method)
(langchain.llms.OpenAI method)
(langchain.llms.PromptLayerOpenAI method)
create_openapi_agent() (in module langchain.agents)
create_outputs() (langchain.chains.LLMChain method)
create_pandas_dataframe_agent() (in module langchain.agents)
create_prompt() (langchain.agents.Agent class method)
(langchain.agents.ConversationalAgent class method)
(langchain.agents.ConversationalChatAgent class method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-9 | (langchain.agents.ConversationalChatAgent class method)
(langchain.agents.ReActTextWorldAgent class method)
(langchain.agents.ZeroShotAgent class method)
create_sql_agent() (in module langchain.agents)
create_vectorstore_agent() (in module langchain.agents)
create_vectorstore_router_agent() (in module langchain.agents)
credentials_profile_name (langchain.embeddings.SagemakerEndpointEmbeddings attribute)
(langchain.llms.SagemakerEndpoint attribute)
critique_chain (langchain.chains.ConstitutionalChain attribute)
D
database (langchain.chains.SQLDatabaseChain attribute)
decider_chain (langchain.chains.SQLDatabaseSequentialChain attribute)
DeepLake (class in langchain.vectorstores)
delete() (langchain.vectorstores.DeepLake method)
delete_collection() (langchain.vectorstores.Chroma method)
delete_dataset() (langchain.vectorstores.DeepLake method)
deployment_name (langchain.llms.AzureOpenAI attribute)
description (langchain.agents.Tool attribute)
deserialize_json_input() (langchain.chains.OpenAPIEndpointChain method)
device (langchain.llms.SelfHostedHuggingFaceLLM attribute)
dict() (langchain.agents.BaseMultiActionAgent method)
(langchain.agents.BaseSingleActionAgent method)
(langchain.llms.AI21 method)
(langchain.llms.AlephAlpha method)
(langchain.llms.Anthropic method)
(langchain.llms.AzureOpenAI method)
(langchain.llms.Banana method)
(langchain.llms.CerebriumAI method)
(langchain.llms.Cohere method)
(langchain.llms.DeepInfra method)
(langchain.llms.ForefrontAI method)
(langchain.llms.GooseAI method)
(langchain.llms.GPT4All method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-10 | (langchain.llms.GooseAI method)
(langchain.llms.GPT4All method)
(langchain.llms.HuggingFaceEndpoint method)
(langchain.llms.HuggingFaceHub method)
(langchain.llms.HuggingFacePipeline method)
(langchain.llms.LlamaCpp method)
(langchain.llms.Modal method)
(langchain.llms.NLPCloud method)
(langchain.llms.OpenAI method)
(langchain.llms.OpenAIChat method)
(langchain.llms.Petals method)
(langchain.llms.PromptLayerOpenAI method)
(langchain.llms.PromptLayerOpenAIChat method)
(langchain.llms.Replicate method)
(langchain.llms.RWKV method)
(langchain.llms.SagemakerEndpoint method)
(langchain.llms.SelfHostedHuggingFaceLLM method)
(langchain.llms.SelfHostedPipeline method)
(langchain.llms.StochasticAI method)
(langchain.llms.Writer method)
(langchain.prompts.BasePromptTemplate method)
(langchain.prompts.FewShotPromptTemplate method)
(langchain.prompts.FewShotPromptWithTemplates method)
disallowed_special (langchain.llms.AzureOpenAI attribute)
(langchain.llms.OpenAIChat attribute)
(langchain.llms.PromptLayerOpenAIChat attribute)
do_sample (langchain.llms.NLPCloud attribute)
(langchain.llms.Petals attribute)
E
early_stopping (langchain.llms.NLPCloud attribute)
early_stopping_method (langchain.agents.AgentExecutor attribute)
(langchain.agents.MRKLChain attribute)
(langchain.agents.ReActChain attribute)
(langchain.agents.SelfAskWithSearchChain attribute)
echo (langchain.llms.AlephAlpha attribute)
(langchain.llms.GPT4All attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-11 | (langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
ElasticVectorSearch (class in langchain.vectorstores)
embed_documents() (langchain.chains.HypotheticalDocumentEmbedder method)
(langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding method)
(langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding method)
(langchain.embeddings.CohereEmbeddings method)
(langchain.embeddings.FakeEmbeddings method)
(langchain.embeddings.HuggingFaceEmbeddings method)
(langchain.embeddings.HuggingFaceHubEmbeddings method)
(langchain.embeddings.HuggingFaceInstructEmbeddings method)
(langchain.embeddings.LlamaCppEmbeddings method)
(langchain.embeddings.OpenAIEmbeddings method)
(langchain.embeddings.SagemakerEndpointEmbeddings method)
(langchain.embeddings.SelfHostedEmbeddings method)
(langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings method)
(langchain.embeddings.TensorflowHubEmbeddings method)
embed_instruction (langchain.embeddings.HuggingFaceInstructEmbeddings attribute)
(langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings attribute)
embed_query() (langchain.chains.HypotheticalDocumentEmbedder method)
(langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding method)
(langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding method)
(langchain.embeddings.CohereEmbeddings method)
(langchain.embeddings.FakeEmbeddings method)
(langchain.embeddings.HuggingFaceEmbeddings method)
(langchain.embeddings.HuggingFaceHubEmbeddings method)
(langchain.embeddings.HuggingFaceInstructEmbeddings method)
(langchain.embeddings.LlamaCppEmbeddings method)
(langchain.embeddings.OpenAIEmbeddings method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-12 | (langchain.embeddings.OpenAIEmbeddings method)
(langchain.embeddings.SagemakerEndpointEmbeddings method)
(langchain.embeddings.SelfHostedEmbeddings method)
(langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings method)
(langchain.embeddings.TensorflowHubEmbeddings method)
embedding (langchain.llms.GPT4All attribute)
endpoint_kwargs (langchain.embeddings.SagemakerEndpointEmbeddings attribute)
(langchain.llms.SagemakerEndpoint attribute)
endpoint_name (langchain.embeddings.SagemakerEndpointEmbeddings attribute)
(langchain.llms.SagemakerEndpoint attribute)
endpoint_url (langchain.llms.CerebriumAI attribute)
(langchain.llms.ForefrontAI attribute)
(langchain.llms.HuggingFaceEndpoint attribute)
(langchain.llms.Modal attribute)
engines (langchain.utilities.searx_search.SearxSearchWrapper attribute)
entity_extraction_chain (langchain.chains.GraphQAChain attribute)
error (langchain.chains.OpenAIModerationChain attribute)
example_keys (langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute)
example_prompt (langchain.prompts.example_selector.LengthBasedExampleSelector attribute)
(langchain.prompts.FewShotPromptTemplate attribute)
(langchain.prompts.FewShotPromptWithTemplates attribute)
example_selector (langchain.prompts.FewShotPromptTemplate attribute)
(langchain.prompts.FewShotPromptWithTemplates attribute)
example_separator (langchain.prompts.FewShotPromptTemplate attribute)
(langchain.prompts.FewShotPromptWithTemplates attribute)
examples (langchain.prompts.example_selector.LengthBasedExampleSelector attribute)
(langchain.prompts.FewShotPromptTemplate attribute)
(langchain.prompts.FewShotPromptWithTemplates attribute)
F
f16_kv (langchain.embeddings.LlamaCppEmbeddings attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-13 | F
f16_kv (langchain.embeddings.LlamaCppEmbeddings attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
FAISS (class in langchain.vectorstores)
fetch_k (langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector attribute)
format() (langchain.prompts.BaseChatPromptTemplate method)
(langchain.prompts.BasePromptTemplate method)
(langchain.prompts.ChatPromptTemplate method)
(langchain.prompts.FewShotPromptTemplate method)
(langchain.prompts.FewShotPromptWithTemplates method)
(langchain.prompts.PromptTemplate method)
format_messages() (langchain.prompts.BaseChatPromptTemplate method)
(langchain.prompts.ChatPromptTemplate method)
(langchain.prompts.MessagesPlaceholder method)
format_prompt() (langchain.prompts.BaseChatPromptTemplate method)
(langchain.prompts.BasePromptTemplate method)
(langchain.prompts.StringPromptTemplate method)
frequency_penalty (langchain.llms.AlephAlpha attribute)
(langchain.llms.AzureOpenAI attribute)
(langchain.llms.Cohere attribute)
(langchain.llms.GooseAI attribute)
frequencyPenalty (langchain.llms.AI21 attribute)
from_agent_and_tools() (langchain.agents.AgentExecutor class method)
from_api_operation() (langchain.chains.OpenAPIEndpointChain class method)
from_chains() (langchain.agents.MRKLChain class method)
from_colored_object_prompt() (langchain.chains.PALChain class method)
from_documents() (langchain.vectorstores.AtlasDB class method)
(langchain.vectorstores.Chroma class method)
(langchain.vectorstores.VectorStore class method)
from_embeddings() (langchain.vectorstores.Annoy class method)
(langchain.vectorstores.FAISS class method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-14 | (langchain.vectorstores.FAISS class method)
from_examples() (langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector class method)
(langchain.prompts.example_selector.SemanticSimilarityExampleSelector class method)
(langchain.prompts.PromptTemplate class method)
from_existing_index() (langchain.vectorstores.Pinecone class method)
from_file() (langchain.prompts.PromptTemplate class method)
from_huggingface_tokenizer() (langchain.text_splitter.TextSplitter class method)
from_llm() (langchain.chains.ChatVectorDBChain class method)
(langchain.chains.ConstitutionalChain class method)
(langchain.chains.ConversationalRetrievalChain class method)
(langchain.chains.GraphQAChain class method)
(langchain.chains.HypotheticalDocumentEmbedder class method)
(langchain.chains.QAGenerationChain class method)
(langchain.chains.SQLDatabaseSequentialChain class method)
from_llm_and_api_docs() (langchain.chains.APIChain class method)
from_llm_and_tools() (langchain.agents.Agent class method)
(langchain.agents.BaseSingleActionAgent class method)
(langchain.agents.ConversationalAgent class method)
(langchain.agents.ConversationalChatAgent class method)
(langchain.agents.ZeroShotAgent class method)
from_math_prompt() (langchain.chains.PALChain class method)
from_model_id() (langchain.llms.HuggingFacePipeline class method)
from_params() (langchain.chains.MapReduceChain class method)
from_pipeline() (langchain.llms.SelfHostedHuggingFaceLLM class method)
(langchain.llms.SelfHostedPipeline class method)
from_string() (langchain.chains.LLMChain class method)
from_template() (langchain.prompts.PromptTemplate class method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-15 | from_template() (langchain.prompts.PromptTemplate class method)
from_texts() (langchain.vectorstores.Annoy class method)
(langchain.vectorstores.AtlasDB class method)
(langchain.vectorstores.Chroma class method)
(langchain.vectorstores.DeepLake class method)
(langchain.vectorstores.ElasticVectorSearch class method)
(langchain.vectorstores.FAISS class method)
(langchain.vectorstores.Milvus class method)
(langchain.vectorstores.OpenSearchVectorSearch class method)
(langchain.vectorstores.Pinecone class method)
(langchain.vectorstores.Qdrant class method)
(langchain.vectorstores.SupabaseVectorStore class method)
(langchain.vectorstores.VectorStore class method)
(langchain.vectorstores.Weaviate class method)
from_tiktoken_encoder() (langchain.text_splitter.TextSplitter class method)
from_url_and_method() (langchain.chains.OpenAPIEndpointChain class method)
func (langchain.agents.Tool attribute)
G
generate() (langchain.chains.LLMChain method)
(langchain.llms.AI21 method)
(langchain.llms.AlephAlpha method)
(langchain.llms.Anthropic method)
(langchain.llms.AzureOpenAI method)
(langchain.llms.Banana method)
(langchain.llms.CerebriumAI method)
(langchain.llms.Cohere method)
(langchain.llms.DeepInfra method)
(langchain.llms.ForefrontAI method)
(langchain.llms.GooseAI method)
(langchain.llms.GPT4All method)
(langchain.llms.HuggingFaceEndpoint method)
(langchain.llms.HuggingFaceHub method)
(langchain.llms.HuggingFacePipeline method)
(langchain.llms.LlamaCpp method)
(langchain.llms.Modal method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-16 | (langchain.llms.LlamaCpp method)
(langchain.llms.Modal method)
(langchain.llms.NLPCloud method)
(langchain.llms.OpenAI method)
(langchain.llms.OpenAIChat method)
(langchain.llms.Petals method)
(langchain.llms.PromptLayerOpenAI method)
(langchain.llms.PromptLayerOpenAIChat method)
(langchain.llms.Replicate method)
(langchain.llms.RWKV method)
(langchain.llms.SagemakerEndpoint method)
(langchain.llms.SelfHostedHuggingFaceLLM method)
(langchain.llms.SelfHostedPipeline method)
(langchain.llms.StochasticAI method)
(langchain.llms.Writer method)
generate_prompt() (langchain.llms.AI21 method)
(langchain.llms.AlephAlpha method)
(langchain.llms.Anthropic method)
(langchain.llms.AzureOpenAI method)
(langchain.llms.Banana method)
(langchain.llms.CerebriumAI method)
(langchain.llms.Cohere method)
(langchain.llms.DeepInfra method)
(langchain.llms.ForefrontAI method)
(langchain.llms.GooseAI method)
(langchain.llms.GPT4All method)
(langchain.llms.HuggingFaceEndpoint method)
(langchain.llms.HuggingFaceHub method)
(langchain.llms.HuggingFacePipeline method)
(langchain.llms.LlamaCpp method)
(langchain.llms.Modal method)
(langchain.llms.NLPCloud method)
(langchain.llms.OpenAI method)
(langchain.llms.OpenAIChat method)
(langchain.llms.Petals method)
(langchain.llms.PromptLayerOpenAI method)
(langchain.llms.PromptLayerOpenAIChat method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-17 | (langchain.llms.PromptLayerOpenAIChat method)
(langchain.llms.Replicate method)
(langchain.llms.RWKV method)
(langchain.llms.SagemakerEndpoint method)
(langchain.llms.SelfHostedHuggingFaceLLM method)
(langchain.llms.SelfHostedPipeline method)
(langchain.llms.StochasticAI method)
(langchain.llms.Writer method)
get_all_tool_names() (in module langchain.agents)
get_allowed_tools() (langchain.agents.Agent method)
(langchain.agents.BaseMultiActionAgent method)
(langchain.agents.BaseSingleActionAgent method)
get_answer_expr (langchain.chains.PALChain attribute)
get_full_inputs() (langchain.agents.Agent method)
get_num_tokens() (langchain.llms.AI21 method)
(langchain.llms.AlephAlpha method)
(langchain.llms.Anthropic method)
(langchain.llms.AzureOpenAI method)
(langchain.llms.Banana method)
(langchain.llms.CerebriumAI method)
(langchain.llms.Cohere method)
(langchain.llms.DeepInfra method)
(langchain.llms.ForefrontAI method)
(langchain.llms.GooseAI method)
(langchain.llms.GPT4All method)
(langchain.llms.HuggingFaceEndpoint method)
(langchain.llms.HuggingFaceHub method)
(langchain.llms.HuggingFacePipeline method)
(langchain.llms.LlamaCpp method)
(langchain.llms.Modal method)
(langchain.llms.NLPCloud method)
(langchain.llms.OpenAI method)
(langchain.llms.OpenAIChat method)
(langchain.llms.Petals method)
(langchain.llms.PromptLayerOpenAI method)
(langchain.llms.PromptLayerOpenAIChat method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-18 | (langchain.llms.PromptLayerOpenAIChat method)
(langchain.llms.Replicate method)
(langchain.llms.RWKV method)
(langchain.llms.SagemakerEndpoint method)
(langchain.llms.SelfHostedHuggingFaceLLM method)
(langchain.llms.SelfHostedPipeline method)
(langchain.llms.StochasticAI method)
(langchain.llms.Writer method)
get_num_tokens_from_messages() (langchain.llms.AI21 method)
(langchain.llms.AlephAlpha method)
(langchain.llms.Anthropic method)
(langchain.llms.AzureOpenAI method)
(langchain.llms.Banana method)
(langchain.llms.CerebriumAI method)
(langchain.llms.Cohere method)
(langchain.llms.DeepInfra method)
(langchain.llms.ForefrontAI method)
(langchain.llms.GooseAI method)
(langchain.llms.GPT4All method)
(langchain.llms.HuggingFaceEndpoint method)
(langchain.llms.HuggingFaceHub method)
(langchain.llms.HuggingFacePipeline method)
(langchain.llms.LlamaCpp method)
(langchain.llms.Modal method)
(langchain.llms.NLPCloud method)
(langchain.llms.OpenAI method)
(langchain.llms.OpenAIChat method)
(langchain.llms.Petals method)
(langchain.llms.PromptLayerOpenAI method)
(langchain.llms.PromptLayerOpenAIChat method)
(langchain.llms.Replicate method)
(langchain.llms.RWKV method)
(langchain.llms.SagemakerEndpoint method)
(langchain.llms.SelfHostedHuggingFaceLLM method)
(langchain.llms.SelfHostedPipeline method)
(langchain.llms.StochasticAI method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-19 | (langchain.llms.StochasticAI method)
(langchain.llms.Writer method)
get_params() (langchain.serpapi.SerpAPIWrapper method)
get_principles() (langchain.chains.ConstitutionalChain class method)
get_sub_prompts() (langchain.llms.AzureOpenAI method)
(langchain.llms.OpenAI method)
(langchain.llms.PromptLayerOpenAI method)
get_text_length (langchain.prompts.example_selector.LengthBasedExampleSelector attribute)
globals (langchain.python.PythonREPL attribute)
graph (langchain.chains.GraphQAChain attribute)
H
hardware (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)
(langchain.llms.SelfHostedHuggingFaceLLM attribute)
(langchain.llms.SelfHostedPipeline attribute)
headers (langchain.utilities.searx_search.SearxSearchWrapper attribute)
hosting (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)
I
inference_fn (langchain.embeddings.SelfHostedEmbeddings attribute)
(langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)
(langchain.llms.SelfHostedHuggingFaceLLM attribute)
(langchain.llms.SelfHostedPipeline attribute)
inference_kwargs (langchain.embeddings.SelfHostedEmbeddings attribute)
initialize_agent() (in module langchain.agents)
InMemoryDocstore (class in langchain.docstore)
input_key (langchain.chains.QAGenerationChain attribute)
input_keys (langchain.chains.ConstitutionalChain property)
(langchain.chains.ConversationChain property)
(langchain.chains.HypotheticalDocumentEmbedder property)
(langchain.chains.QAGenerationChain property)
(langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-20 | (langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute)
input_variables (langchain.chains.SequentialChain attribute)
(langchain.chains.TransformChain attribute)
(langchain.prompts.BasePromptTemplate attribute)
(langchain.prompts.FewShotPromptTemplate attribute)
(langchain.prompts.FewShotPromptWithTemplates attribute)
(langchain.prompts.MessagesPlaceholder property)
(langchain.prompts.PromptTemplate attribute)
J
json() (langchain.llms.AI21 method)
(langchain.llms.AlephAlpha method)
(langchain.llms.Anthropic method)
(langchain.llms.AzureOpenAI method)
(langchain.llms.Banana method)
(langchain.llms.CerebriumAI method)
(langchain.llms.Cohere method)
(langchain.llms.DeepInfra method)
(langchain.llms.ForefrontAI method)
(langchain.llms.GooseAI method)
(langchain.llms.GPT4All method)
(langchain.llms.HuggingFaceEndpoint method)
(langchain.llms.HuggingFaceHub method)
(langchain.llms.HuggingFacePipeline method)
(langchain.llms.LlamaCpp method)
(langchain.llms.Modal method)
(langchain.llms.NLPCloud method)
(langchain.llms.OpenAI method)
(langchain.llms.OpenAIChat method)
(langchain.llms.Petals method)
(langchain.llms.PromptLayerOpenAI method)
(langchain.llms.PromptLayerOpenAIChat method)
(langchain.llms.Replicate method)
(langchain.llms.RWKV method)
(langchain.llms.SagemakerEndpoint method)
(langchain.llms.SelfHostedHuggingFaceLLM method)
(langchain.llms.SelfHostedPipeline method)
(langchain.llms.StochasticAI method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-21 | (langchain.llms.StochasticAI method)
(langchain.llms.Writer method)
K
k (langchain.chains.QAGenerationChain attribute)
(langchain.chains.VectorDBQA attribute)
(langchain.chains.VectorDBQAWithSourcesChain attribute)
(langchain.llms.Cohere attribute)
(langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute)
(langchain.utilities.searx_search.SearxSearchWrapper attribute)
L
langchain.agents
module
langchain.chains
module
langchain.docstore
module
langchain.embeddings
module
langchain.llms
module
langchain.prompts
module
langchain.prompts.example_selector
module
langchain.python
module
langchain.serpapi
module
langchain.text_splitter
module
langchain.utilities.searx_search
module
langchain.vectorstores
module
last_n_tokens_size (langchain.llms.LlamaCpp attribute)
LatexTextSplitter (class in langchain.text_splitter)
length (langchain.llms.ForefrontAI attribute)
(langchain.llms.Writer attribute)
length_no_input (langchain.llms.NLPCloud attribute)
length_penalty (langchain.llms.NLPCloud attribute)
length_pentaly (langchain.llms.Writer attribute)
list_assertions_prompt (langchain.chains.LLMCheckerChain attribute)
llm (langchain.chains.LLMBashChain attribute)
(langchain.chains.LLMChain attribute)
(langchain.chains.LLMCheckerChain attribute)
(langchain.chains.LLMMathChain attribute)
(langchain.chains.LLMSummarizationCheckerChain attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-22 | (langchain.chains.LLMSummarizationCheckerChain attribute)
(langchain.chains.PALChain attribute)
(langchain.chains.SQLDatabaseChain attribute)
llm_chain (langchain.agents.Agent attribute)
(langchain.agents.LLMSingleActionAgent attribute)
(langchain.chains.HypotheticalDocumentEmbedder attribute)
(langchain.chains.LLMRequestsChain attribute)
(langchain.chains.QAGenerationChain attribute)
llm_prefix (langchain.agents.Agent property)
(langchain.agents.ConversationalAgent property)
(langchain.agents.ConversationalChatAgent property)
(langchain.agents.ZeroShotAgent property)
load_agent() (in module langchain.agents)
load_chain() (in module langchain.chains)
load_fn_kwargs (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)
(langchain.llms.SelfHostedHuggingFaceLLM attribute)
(langchain.llms.SelfHostedPipeline attribute)
load_local() (langchain.vectorstores.Annoy class method)
(langchain.vectorstores.FAISS class method)
load_prompt() (in module langchain.prompts)
load_tools() (in module langchain.agents)
locals (langchain.python.PythonREPL attribute)
log_probs (langchain.llms.AlephAlpha attribute)
logit_bias (langchain.llms.AlephAlpha attribute)
(langchain.llms.AzureOpenAI attribute)
(langchain.llms.GooseAI attribute)
logitBias (langchain.llms.AI21 attribute)
logits_all (langchain.embeddings.LlamaCppEmbeddings attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
logprobs (langchain.llms.LlamaCpp attribute)
(langchain.llms.Writer attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-23 | (langchain.llms.Writer attribute)
lookup_tool() (langchain.agents.AgentExecutor method)
M
MarkdownTextSplitter (class in langchain.text_splitter)
max_checks (langchain.chains.LLMSummarizationCheckerChain attribute)
max_execution_time (langchain.agents.AgentExecutor attribute)
(langchain.agents.MRKLChain attribute)
(langchain.agents.ReActChain attribute)
(langchain.agents.SelfAskWithSearchChain attribute)
max_iterations (langchain.agents.AgentExecutor attribute)
(langchain.agents.MRKLChain attribute)
(langchain.agents.ReActChain attribute)
(langchain.agents.SelfAskWithSearchChain attribute)
max_length (langchain.llms.NLPCloud attribute)
(langchain.llms.Petals attribute)
(langchain.prompts.example_selector.LengthBasedExampleSelector attribute)
max_marginal_relevance_search() (langchain.vectorstores.Annoy method)
(langchain.vectorstores.Chroma method)
(langchain.vectorstores.DeepLake method)
(langchain.vectorstores.FAISS method)
(langchain.vectorstores.Milvus method)
(langchain.vectorstores.Qdrant method)
(langchain.vectorstores.SupabaseVectorStore method)
(langchain.vectorstores.VectorStore method)
(langchain.vectorstores.Weaviate method)
max_marginal_relevance_search_by_vector() (langchain.vectorstores.Annoy method)
(langchain.vectorstores.Chroma method)
(langchain.vectorstores.DeepLake method)
(langchain.vectorstores.FAISS method)
(langchain.vectorstores.SupabaseVectorStore method)
(langchain.vectorstores.VectorStore method)
max_new_tokens (langchain.llms.Petals attribute)
max_retries (langchain.embeddings.OpenAIEmbeddings attribute)
(langchain.llms.AzureOpenAI attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-24 | (langchain.llms.AzureOpenAI attribute)
(langchain.llms.OpenAIChat attribute)
(langchain.llms.PromptLayerOpenAIChat attribute)
max_tokens (langchain.llms.AzureOpenAI attribute)
(langchain.llms.Cohere attribute)
(langchain.llms.GooseAI attribute)
(langchain.llms.LlamaCpp attribute)
max_tokens_for_prompt() (langchain.llms.AzureOpenAI method)
(langchain.llms.OpenAI method)
(langchain.llms.PromptLayerOpenAI method)
max_tokens_limit (langchain.chains.ConversationalRetrievalChain attribute)
(langchain.chains.RetrievalQAWithSourcesChain attribute)
(langchain.chains.VectorDBQAWithSourcesChain attribute)
max_tokens_per_generation (langchain.llms.RWKV attribute)
max_tokens_to_sample (langchain.llms.Anthropic attribute)
maximum_tokens (langchain.llms.AlephAlpha attribute)
maxTokens (langchain.llms.AI21 attribute)
memory (langchain.agents.MRKLChain attribute)
(langchain.agents.ReActChain attribute)
(langchain.agents.SelfAskWithSearchChain attribute)
(langchain.chains.ConversationChain attribute)
merge_from() (langchain.vectorstores.FAISS method)
METADATA_KEY (langchain.vectorstores.Qdrant attribute)
Milvus (class in langchain.vectorstores)
min_length (langchain.llms.NLPCloud attribute)
min_tokens (langchain.llms.GooseAI attribute)
minimum_tokens (langchain.llms.AlephAlpha attribute)
minTokens (langchain.llms.AI21 attribute)
model (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)
(langchain.embeddings.CohereEmbeddings attribute)
(langchain.llms.AI21 attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-25 | (langchain.embeddings.CohereEmbeddings attribute)
(langchain.llms.AI21 attribute)
(langchain.llms.AlephAlpha attribute)
(langchain.llms.Anthropic attribute)
(langchain.llms.Cohere attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.RWKV attribute)
model_id (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)
(langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings attribute)
(langchain.llms.HuggingFacePipeline attribute)
(langchain.llms.SelfHostedHuggingFaceLLM attribute)
(langchain.llms.Writer attribute)
model_key (langchain.llms.Banana attribute)
model_kwargs (langchain.embeddings.HuggingFaceEmbeddings attribute)
(langchain.embeddings.HuggingFaceHubEmbeddings attribute)
(langchain.embeddings.HuggingFaceInstructEmbeddings attribute)
(langchain.embeddings.SagemakerEndpointEmbeddings attribute)
(langchain.llms.AzureOpenAI attribute)
(langchain.llms.Banana attribute)
(langchain.llms.CerebriumAI attribute)
(langchain.llms.GooseAI attribute)
(langchain.llms.HuggingFaceEndpoint attribute)
(langchain.llms.HuggingFaceHub attribute)
(langchain.llms.HuggingFacePipeline attribute)
(langchain.llms.Modal attribute)
(langchain.llms.OpenAIChat attribute)
(langchain.llms.Petals attribute)
(langchain.llms.PromptLayerOpenAIChat attribute)
(langchain.llms.SagemakerEndpoint attribute)
(langchain.llms.SelfHostedHuggingFaceLLM attribute)
(langchain.llms.StochasticAI attribute)
model_load_fn (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-26 | model_load_fn (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)
(langchain.llms.SelfHostedHuggingFaceLLM attribute)
(langchain.llms.SelfHostedPipeline attribute)
model_name (langchain.chains.OpenAIModerationChain attribute)
(langchain.embeddings.HuggingFaceEmbeddings attribute)
(langchain.embeddings.HuggingFaceInstructEmbeddings attribute)
(langchain.llms.AzureOpenAI attribute)
(langchain.llms.GooseAI attribute)
(langchain.llms.NLPCloud attribute)
(langchain.llms.OpenAIChat attribute)
(langchain.llms.Petals attribute)
(langchain.llms.PromptLayerOpenAIChat attribute)
model_path (langchain.llms.LlamaCpp attribute)
model_reqs (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)
(langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings attribute)
(langchain.llms.SelfHostedHuggingFaceLLM attribute)
(langchain.llms.SelfHostedPipeline attribute)
model_url (langchain.embeddings.TensorflowHubEmbeddings attribute)
modelname_to_contextsize() (langchain.llms.AzureOpenAI method)
(langchain.llms.OpenAI method)
(langchain.llms.PromptLayerOpenAI method)
module
langchain.agents
langchain.chains
langchain.docstore
langchain.embeddings
langchain.llms
langchain.prompts
langchain.prompts.example_selector
langchain.python
langchain.serpapi
langchain.text_splitter
langchain.utilities.searx_search
langchain.vectorstores
N
n (langchain.llms.AlephAlpha attribute)
(langchain.llms.AzureOpenAI attribute)
(langchain.llms.GooseAI attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-27 | (langchain.llms.AzureOpenAI attribute)
(langchain.llms.GooseAI attribute)
n_batch (langchain.embeddings.LlamaCppEmbeddings attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
n_ctx (langchain.embeddings.LlamaCppEmbeddings attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
n_parts (langchain.embeddings.LlamaCppEmbeddings attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
n_predict (langchain.llms.GPT4All attribute)
n_threads (langchain.embeddings.LlamaCppEmbeddings attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
NLTKTextSplitter (class in langchain.text_splitter)
normalize (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute)
num_beams (langchain.llms.NLPCloud attribute)
num_return_sequences (langchain.llms.NLPCloud attribute)
numResults (langchain.llms.AI21 attribute)
O
observation_prefix (langchain.agents.Agent property)
(langchain.agents.ConversationalAgent property)
(langchain.agents.ConversationalChatAgent property)
(langchain.agents.ZeroShotAgent property)
openai_api_key (langchain.chains.OpenAIModerationChain attribute)
openai_organization (langchain.chains.OpenAIModerationChain attribute)
OpenSearchVectorSearch (class in langchain.vectorstores)
output_key (langchain.chains.QAGenerationChain attribute)
output_keys (langchain.chains.ConstitutionalChain property)
(langchain.chains.HypotheticalDocumentEmbedder property)
(langchain.chains.QAGenerationChain property) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-28 | (langchain.chains.QAGenerationChain property)
output_parser (langchain.agents.Agent attribute)
(langchain.agents.ConversationalAgent attribute)
(langchain.agents.ConversationalChatAgent attribute)
(langchain.agents.LLMSingleActionAgent attribute)
(langchain.agents.ReActTextWorldAgent attribute)
(langchain.agents.ZeroShotAgent attribute)
(langchain.prompts.BasePromptTemplate attribute)
output_variables (langchain.chains.TransformChain attribute)
P
p (langchain.llms.Cohere attribute)
param_mapping (langchain.chains.OpenAPIEndpointChain attribute)
params (langchain.serpapi.SerpAPIWrapper attribute)
(langchain.utilities.searx_search.SearxSearchWrapper attribute)
parse() (langchain.agents.AgentOutputParser method)
partial() (langchain.prompts.BasePromptTemplate method)
(langchain.prompts.ChatPromptTemplate method)
penalty_alpha_frequency (langchain.llms.RWKV attribute)
penalty_alpha_presence (langchain.llms.RWKV attribute)
penalty_bias (langchain.llms.AlephAlpha attribute)
penalty_exceptions (langchain.llms.AlephAlpha attribute)
penalty_exceptions_include_stop_sequences (langchain.llms.AlephAlpha attribute)
persist() (langchain.vectorstores.Chroma method)
(langchain.vectorstores.DeepLake method)
Pinecone (class in langchain.vectorstores)
plan() (langchain.agents.Agent method)
(langchain.agents.BaseMultiActionAgent method)
(langchain.agents.BaseSingleActionAgent method)
(langchain.agents.LLMSingleActionAgent method)
predict() (langchain.chains.LLMChain method)
predict_and_parse() (langchain.chains.LLMChain method)
prefix (langchain.prompts.FewShotPromptTemplate attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-29 | prefix (langchain.prompts.FewShotPromptTemplate attribute)
(langchain.prompts.FewShotPromptWithTemplates attribute)
prefix_messages (langchain.llms.OpenAIChat attribute)
(langchain.llms.PromptLayerOpenAIChat attribute)
prep_prompts() (langchain.chains.LLMChain method)
prep_streaming_params() (langchain.llms.AzureOpenAI method)
(langchain.llms.OpenAI method)
(langchain.llms.PromptLayerOpenAI method)
presence_penalty (langchain.llms.AlephAlpha attribute)
(langchain.llms.AzureOpenAI attribute)
(langchain.llms.Cohere attribute)
(langchain.llms.GooseAI attribute)
presencePenalty (langchain.llms.AI21 attribute)
process_index_results() (langchain.vectorstores.Annoy method)
Prompt (in module langchain.prompts)
prompt (langchain.chains.ConversationChain attribute)
(langchain.chains.LLMBashChain attribute)
(langchain.chains.LLMChain attribute)
(langchain.chains.LLMMathChain attribute)
(langchain.chains.PALChain attribute)
(langchain.chains.SQLDatabaseChain attribute)
python_globals (langchain.chains.PALChain attribute)
python_locals (langchain.chains.PALChain attribute)
PythonCodeTextSplitter (class in langchain.text_splitter)
Q
qa_chain (langchain.chains.GraphQAChain attribute)
Qdrant (class in langchain.vectorstores)
query_instruction (langchain.embeddings.HuggingFaceInstructEmbeddings attribute)
(langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings attribute)
query_name (langchain.vectorstores.SupabaseVectorStore attribute)
query_suffix (langchain.utilities.searx_search.SearxSearchWrapper attribute)
R | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-30 | query_suffix (langchain.utilities.searx_search.SearxSearchWrapper attribute)
R
random_seed (langchain.llms.Writer attribute)
raw_completion (langchain.llms.AlephAlpha attribute)
REACT_DOCSTORE (langchain.agents.AgentType attribute)
RecursiveCharacterTextSplitter (class in langchain.text_splitter)
reduce_k_below_max_tokens (langchain.chains.RetrievalQAWithSourcesChain attribute)
(langchain.chains.VectorDBQAWithSourcesChain attribute)
region_name (langchain.embeddings.SagemakerEndpointEmbeddings attribute)
(langchain.llms.SagemakerEndpoint attribute)
remove_end_sequence (langchain.llms.NLPCloud attribute)
remove_input (langchain.llms.NLPCloud attribute)
repeat_last_n (langchain.llms.GPT4All attribute)
repeat_penalty (langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
repetition_penalties_include_completion (langchain.llms.AlephAlpha attribute)
repetition_penalties_include_prompt (langchain.llms.AlephAlpha attribute)
repetition_penalty (langchain.llms.ForefrontAI attribute)
(langchain.llms.NLPCloud attribute)
(langchain.llms.Writer attribute)
repo_id (langchain.embeddings.HuggingFaceHubEmbeddings attribute)
(langchain.llms.HuggingFaceHub attribute)
request_timeout (langchain.llms.AzureOpenAI attribute)
requests (langchain.chains.OpenAPIEndpointChain attribute)
requests_wrapper (langchain.chains.APIChain attribute)
(langchain.chains.LLMRequestsChain attribute)
results() (langchain.serpapi.SerpAPIWrapper method)
(langchain.utilities.searx_search.SearxSearchWrapper method)
retriever (langchain.chains.ConversationalRetrievalChain attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-31 | retriever (langchain.chains.ConversationalRetrievalChain attribute)
(langchain.chains.RetrievalQA attribute)
(langchain.chains.RetrievalQAWithSourcesChain attribute)
return_all (langchain.chains.SequentialChain attribute)
return_direct (langchain.chains.SQLDatabaseChain attribute)
return_intermediate_steps (langchain.agents.AgentExecutor attribute)
(langchain.agents.MRKLChain attribute)
(langchain.agents.ReActChain attribute)
(langchain.agents.SelfAskWithSearchChain attribute)
(langchain.chains.OpenAPIEndpointChain attribute)
(langchain.chains.PALChain attribute)
(langchain.chains.SQLDatabaseChain attribute)
(langchain.chains.SQLDatabaseSequentialChain attribute)
return_stopped_response() (langchain.agents.Agent method)
(langchain.agents.BaseMultiActionAgent method)
(langchain.agents.BaseSingleActionAgent method)
return_values (langchain.agents.Agent property)
(langchain.agents.BaseMultiActionAgent property)
(langchain.agents.BaseSingleActionAgent property)
revised_answer_prompt (langchain.chains.LLMCheckerChain attribute)
revised_summary_prompt (langchain.chains.LLMSummarizationCheckerChain attribute)
revision_chain (langchain.chains.ConstitutionalChain attribute)
run() (langchain.python.PythonREPL method)
(langchain.serpapi.SerpAPIWrapper method)
(langchain.utilities.searx_search.SearxSearchWrapper method)
rwkv_verbose (langchain.llms.RWKV attribute)
S
save() (langchain.agents.AgentExecutor method)
(langchain.agents.BaseMultiActionAgent method)
(langchain.agents.BaseSingleActionAgent method)
(langchain.llms.AI21 method)
(langchain.llms.AlephAlpha method)
(langchain.llms.Anthropic method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-32 | (langchain.llms.AlephAlpha method)
(langchain.llms.Anthropic method)
(langchain.llms.AzureOpenAI method)
(langchain.llms.Banana method)
(langchain.llms.CerebriumAI method)
(langchain.llms.Cohere method)
(langchain.llms.DeepInfra method)
(langchain.llms.ForefrontAI method)
(langchain.llms.GooseAI method)
(langchain.llms.GPT4All method)
(langchain.llms.HuggingFaceEndpoint method)
(langchain.llms.HuggingFaceHub method)
(langchain.llms.HuggingFacePipeline method)
(langchain.llms.LlamaCpp method)
(langchain.llms.Modal method)
(langchain.llms.NLPCloud method)
(langchain.llms.OpenAI method)
(langchain.llms.OpenAIChat method)
(langchain.llms.Petals method)
(langchain.llms.PromptLayerOpenAI method)
(langchain.llms.PromptLayerOpenAIChat method)
(langchain.llms.Replicate method)
(langchain.llms.RWKV method)
(langchain.llms.SagemakerEndpoint method)
(langchain.llms.SelfHostedHuggingFaceLLM method)
(langchain.llms.SelfHostedPipeline method)
(langchain.llms.StochasticAI method)
(langchain.llms.Writer method)
(langchain.prompts.BasePromptTemplate method)
(langchain.prompts.ChatPromptTemplate method)
save_agent() (langchain.agents.AgentExecutor method)
save_local() (langchain.vectorstores.Annoy method)
(langchain.vectorstores.FAISS method)
search() (langchain.docstore.InMemoryDocstore method)
(langchain.docstore.Wikipedia method)
(langchain.vectorstores.DeepLake method)
search_kwargs (langchain.chains.ChatVectorDBChain attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-33 | search_kwargs (langchain.chains.ChatVectorDBChain attribute)
(langchain.chains.VectorDBQA attribute)
(langchain.chains.VectorDBQAWithSourcesChain attribute)
search_type (langchain.chains.VectorDBQA attribute)
searx_host (langchain.utilities.searx_search.SearxSearchWrapper attribute)
SearxResults (class in langchain.utilities.searx_search)
seed (langchain.embeddings.LlamaCppEmbeddings attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
select_examples() (langchain.prompts.example_selector.LengthBasedExampleSelector method)
(langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector method)
(langchain.prompts.example_selector.SemanticSimilarityExampleSelector method)
SELF_ASK_WITH_SEARCH (langchain.agents.AgentType attribute)
serpapi_api_key (langchain.serpapi.SerpAPIWrapper attribute)
similarity_search() (langchain.vectorstores.Annoy method)
(langchain.vectorstores.AtlasDB method)
(langchain.vectorstores.Chroma method)
(langchain.vectorstores.DeepLake method)
(langchain.vectorstores.ElasticVectorSearch method)
(langchain.vectorstores.FAISS method)
(langchain.vectorstores.Milvus method)
(langchain.vectorstores.OpenSearchVectorSearch method)
(langchain.vectorstores.Pinecone method)
(langchain.vectorstores.Qdrant method)
(langchain.vectorstores.SupabaseVectorStore method)
(langchain.vectorstores.VectorStore method)
(langchain.vectorstores.Weaviate method)
similarity_search_by_index() (langchain.vectorstores.Annoy method)
similarity_search_by_vector() (langchain.vectorstores.Annoy method)
(langchain.vectorstores.Chroma method)
(langchain.vectorstores.DeepLake method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-34 | (langchain.vectorstores.Chroma method)
(langchain.vectorstores.DeepLake method)
(langchain.vectorstores.FAISS method)
(langchain.vectorstores.SupabaseVectorStore method)
(langchain.vectorstores.VectorStore method)
(langchain.vectorstores.Weaviate method)
similarity_search_by_vector_returning_embeddings() (langchain.vectorstores.SupabaseVectorStore method)
similarity_search_by_vector_with_relevance_scores() (langchain.vectorstores.SupabaseVectorStore method)
similarity_search_with_relevance_scores() (langchain.vectorstores.SupabaseVectorStore method)
(langchain.vectorstores.VectorStore method)
similarity_search_with_score() (langchain.vectorstores.Annoy method)
(langchain.vectorstores.Chroma method)
(langchain.vectorstores.DeepLake method)
(langchain.vectorstores.FAISS method)
(langchain.vectorstores.Milvus method)
(langchain.vectorstores.Pinecone method)
(langchain.vectorstores.Qdrant method)
similarity_search_with_score_by_index() (langchain.vectorstores.Annoy method)
similarity_search_with_score_by_vector() (langchain.vectorstores.Annoy method)
(langchain.vectorstores.FAISS method)
SpacyTextSplitter (class in langchain.text_splitter)
split_documents() (langchain.text_splitter.TextSplitter method)
split_text() (langchain.text_splitter.CharacterTextSplitter method)
(langchain.text_splitter.NLTKTextSplitter method)
(langchain.text_splitter.RecursiveCharacterTextSplitter method)
(langchain.text_splitter.SpacyTextSplitter method)
(langchain.text_splitter.TextSplitter method)
(langchain.text_splitter.TokenTextSplitter method)
sql_chain (langchain.chains.SQLDatabaseSequentialChain attribute)
stop (langchain.agents.LLMSingleActionAgent attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-35 | stop (langchain.agents.LLMSingleActionAgent attribute)
(langchain.chains.PALChain attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
(langchain.llms.Writer attribute)
stop_sequences (langchain.llms.AlephAlpha attribute)
strategy (langchain.llms.RWKV attribute)
stream() (langchain.llms.Anthropic method)
(langchain.llms.AzureOpenAI method)
(langchain.llms.OpenAI method)
(langchain.llms.PromptLayerOpenAI method)
streaming (langchain.llms.Anthropic attribute)
(langchain.llms.AzureOpenAI attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.OpenAIChat attribute)
(langchain.llms.PromptLayerOpenAIChat attribute)
strip_outputs (langchain.chains.SimpleSequentialChain attribute)
suffix (langchain.llms.LlamaCpp attribute)
(langchain.prompts.FewShotPromptTemplate attribute)
(langchain.prompts.FewShotPromptWithTemplates attribute)
SupabaseVectorStore (class in langchain.vectorstores)
T
table_name (langchain.vectorstores.SupabaseVectorStore attribute)
task (langchain.embeddings.HuggingFaceHubEmbeddings attribute)
(langchain.llms.HuggingFaceEndpoint attribute)
(langchain.llms.HuggingFaceHub attribute)
(langchain.llms.SelfHostedHuggingFaceLLM attribute)
temp (langchain.llms.GPT4All attribute)
temperature (langchain.llms.AI21 attribute)
(langchain.llms.AlephAlpha attribute)
(langchain.llms.Anthropic attribute)
(langchain.llms.AzureOpenAI attribute)
(langchain.llms.Cohere attribute)
(langchain.llms.ForefrontAI attribute)
(langchain.llms.GooseAI attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-36 | (langchain.llms.ForefrontAI attribute)
(langchain.llms.GooseAI attribute)
(langchain.llms.LlamaCpp attribute)
(langchain.llms.NLPCloud attribute)
(langchain.llms.Petals attribute)
(langchain.llms.RWKV attribute)
(langchain.llms.Writer attribute)
template (langchain.prompts.PromptTemplate attribute)
template_format (langchain.prompts.FewShotPromptTemplate attribute)
(langchain.prompts.FewShotPromptWithTemplates attribute)
(langchain.prompts.PromptTemplate attribute)
text_length (langchain.chains.LLMRequestsChain attribute)
text_splitter (langchain.chains.AnalyzeDocumentChain attribute)
(langchain.chains.MapReduceChain attribute)
(langchain.chains.QAGenerationChain attribute)
TextSplitter (class in langchain.text_splitter)
tokenizer (langchain.llms.Petals attribute)
tokens (langchain.llms.AlephAlpha attribute)
tokens_path (langchain.llms.RWKV attribute)
tokens_to_generate (langchain.llms.Writer attribute)
TokenTextSplitter (class in langchain.text_splitter)
tool() (in module langchain.agents)
tool_run_logging_kwargs() (langchain.agents.Agent method)
(langchain.agents.BaseMultiActionAgent method)
(langchain.agents.BaseSingleActionAgent method)
(langchain.agents.LLMSingleActionAgent method)
tools (langchain.agents.AgentExecutor attribute)
(langchain.agents.MRKLChain attribute)
(langchain.agents.ReActChain attribute)
(langchain.agents.SelfAskWithSearchChain attribute)
top_k (langchain.chains.SQLDatabaseChain attribute)
(langchain.llms.AlephAlpha attribute)
(langchain.llms.Anthropic attribute)
(langchain.llms.ForefrontAI attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-37 | (langchain.llms.Anthropic attribute)
(langchain.llms.ForefrontAI attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
(langchain.llms.NLPCloud attribute)
(langchain.llms.Petals attribute)
(langchain.llms.Writer attribute)
top_k_docs_for_context (langchain.chains.ChatVectorDBChain attribute)
top_p (langchain.llms.AlephAlpha attribute)
(langchain.llms.Anthropic attribute)
(langchain.llms.AzureOpenAI attribute)
(langchain.llms.ForefrontAI attribute)
(langchain.llms.GooseAI attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
(langchain.llms.NLPCloud attribute)
(langchain.llms.Petals attribute)
(langchain.llms.RWKV attribute)
(langchain.llms.Writer attribute)
topP (langchain.llms.AI21 attribute)
transform (langchain.chains.TransformChain attribute)
transform_documents() (langchain.text_splitter.TextSplitter method)
truncate (langchain.embeddings.CohereEmbeddings attribute)
(langchain.llms.Cohere attribute)
U
unsecure (langchain.utilities.searx_search.SearxSearchWrapper attribute)
update_forward_refs() (langchain.llms.AI21 class method)
(langchain.llms.AlephAlpha class method)
(langchain.llms.Anthropic class method)
(langchain.llms.AzureOpenAI class method)
(langchain.llms.Banana class method)
(langchain.llms.CerebriumAI class method)
(langchain.llms.Cohere class method)
(langchain.llms.DeepInfra class method)
(langchain.llms.ForefrontAI class method)
(langchain.llms.GooseAI class method) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-38 | (langchain.llms.GooseAI class method)
(langchain.llms.GPT4All class method)
(langchain.llms.HuggingFaceEndpoint class method)
(langchain.llms.HuggingFaceHub class method)
(langchain.llms.HuggingFacePipeline class method)
(langchain.llms.LlamaCpp class method)
(langchain.llms.Modal class method)
(langchain.llms.NLPCloud class method)
(langchain.llms.OpenAI class method)
(langchain.llms.OpenAIChat class method)
(langchain.llms.Petals class method)
(langchain.llms.PromptLayerOpenAI class method)
(langchain.llms.PromptLayerOpenAIChat class method)
(langchain.llms.Replicate class method)
(langchain.llms.RWKV class method)
(langchain.llms.SagemakerEndpoint class method)
(langchain.llms.SelfHostedHuggingFaceLLM class method)
(langchain.llms.SelfHostedPipeline class method)
(langchain.llms.StochasticAI class method)
(langchain.llms.Writer class method)
use_mlock (langchain.embeddings.LlamaCppEmbeddings attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
use_multiplicative_presence_penalty (langchain.llms.AlephAlpha attribute)
V
validate_template (langchain.prompts.FewShotPromptTemplate attribute)
(langchain.prompts.FewShotPromptWithTemplates attribute)
(langchain.prompts.PromptTemplate attribute)
VectorStore (class in langchain.vectorstores)
vectorstore (langchain.chains.ChatVectorDBChain attribute)
(langchain.chains.VectorDBQA attribute)
(langchain.chains.VectorDBQAWithSourcesChain attribute)
(langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute) | https://python.langchain.com/en/latest/genindex.html |
b9563f5ad006-39 | (langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute)
verbose (langchain.agents.MRKLChain attribute)
(langchain.agents.ReActChain attribute)
(langchain.agents.SelfAskWithSearchChain attribute)
(langchain.llms.AzureOpenAI attribute)
(langchain.llms.OpenAI attribute)
(langchain.llms.OpenAIChat attribute)
vocab_only (langchain.embeddings.LlamaCppEmbeddings attribute)
(langchain.llms.GPT4All attribute)
(langchain.llms.LlamaCpp attribute)
W
Weaviate (class in langchain.vectorstores)
Wikipedia (class in langchain.docstore)
Z
ZERO_SHOT_REACT_DESCRIPTION (langchain.agents.AgentType attribute)
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 21, 2023. | https://python.langchain.com/en/latest/genindex.html |
546a3f48f042-0 | .rst
.pdf
Welcome to LangChain
Contents
Getting Started
Modules
Use Cases
Reference Docs
LangChain Ecosystem
Additional Resources
Welcome to LangChain#
LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:
Be data-aware: connect a language model to other sources of data
Be agentic: allow a language model to interact with its environment
The LangChain framework is designed with the above principles in mind.
This is the Python-specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.
Getting Started#
Check out the guide below for a walkthrough of how to get started using LangChain to create a Language Model application.
Getting Started Documentation
Modules#
There are several main modules that LangChain provides support for.
For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.
These modules are, in increasing order of complexity:
Models: The various model types and model integrations LangChain supports.
Prompts: This includes prompt management, prompt optimization, and prompt serialization.
Memory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
Indexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.
Chains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. A minimal sketch appears after this list.
Agents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
Use Cases#
The above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.
Autonomous Agents: Autonomous agents are long running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI.
Agent Simulations: Putting agents in a sandbox and observing how they interact with each other or respond to events can be an interesting way to observe their long-term memory abilities.
Personal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.
Question Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.
Chatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.
Querying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (CSVs, SQL, dataframes, etc.), you should read this page.
Code Understanding: If you want to understand how to use LLMs to query source code from GitHub, you should read this page.
Interacting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.
Extraction: Extract structured information from text.
Summarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation. | https://python.langchain.com/en/latest/index.html |
546a3f48f042-2 | Evaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
Reference Docs#
All of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.
Reference Documentation
LangChain Ecosystem#
Guides for how other companies/products can be used with LangChain
LangChain Ecosystem
Additional Resources#
Additional collection of resources we think may be useful as you develop your application!
LangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.
Glossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!
Gallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.
Deployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.
Tracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.
Model Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
Discord: Join us on our Discord to discuss all things LangChain!
YouTube: A collection of the LangChain tutorials and videos.
Production Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.
By Harrison Chase
© Copyright 2023, Harrison Chase. | https://python.langchain.com/en/latest/index.html |
546a3f48f042-3 | By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 21, 2023. | https://python.langchain.com/en/latest/index.html |
32944cbf20c7-0 | .md
.pdf
Tracing
Contents
Tracing Walkthrough
Changing Sessions
Tracing#
By enabling tracing in your LangChain runs, you’ll be able to more effectively visualize, step through, and debug your chains and agents.
First, you should install tracing and set up your environment properly.
You can use either a locally hosted version of this (uses Docker) or a cloud hosted version (in closed alpha).
If you’re interested in using the hosted platform, please fill out the form here.
Locally Hosted Setup
Cloud Hosted Setup
Tracing Walkthrough#
When you first access the UI, you should see a page with your tracing sessions.
An initial one “default” should already be created for you.
A session is just a way to group traces together.
If you click on a session, it will take you to a page with no recorded traces that says “No Runs.”
You can create a new session with the new session form.
If we click on the default session, we can see that to start we have no traces stored.
If we now start running chains and agents with tracing enabled, we will see data show up here.
To do so, we can run this notebook as an example.
After running it, we will see an initial trace show up.
From here we can explore the trace at a high level by clicking on the arrow to show nested runs.
We can keep on clicking further and further down to explore deeper and deeper.
We can also click on the “Explore” button of the top level run to dive even deeper.
Here, we can see the inputs and outputs in full, as well as all the nested traces.
We can keep on exploring each of these nested traces in more detail.
For example, here is the lowest level trace with the exact inputs/outputs to the LLM.
Changing Sessions# | https://python.langchain.com/en/latest/tracing.html |
32944cbf20c7-1 | Changing Sessions#
To initially record traces to a session other than "default", you can set the LANGCHAIN_SESSION environment variable to the name of the session you want to record to:
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
os.environ["LANGCHAIN_SESSION"] = "my_session" # Make sure this session actually exists. You can create a new session in the UI.
To switch sessions mid-script or mid-notebook, do NOT set the LANGCHAIN_SESSION environment variable. Instead: langchain.set_tracing_callback_manager(session_name="my_session")
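For example, a script could record its first runs to the current session and then redirect later runs elsewhere. This is a minimal sketch, assuming tracing is already enabled via LANGCHAIN_HANDLER and that "my_session" exists in the UI:
import langchain
# runs executed before this call are traced to the current session
langchain.set_tracing_callback_manager(session_name="my_session")
# runs executed after this call are traced to "my_session"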
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 21, 2023. | https://python.langchain.com/en/latest/tracing.html |
744b12cd7ecc-0 | .rst
.pdf
LangChain Ecosystem
LangChain Ecosystem#
Guides for how other companies/products can be used with LangChain
AI21 Labs
Aim
Apify
AtlasDB
Banana
CerebriumAI
Chroma
ClearML Integration
Cohere
Comet
Databerry
DeepInfra
Deep Lake
ForefrontAI
Google Search Wrapper
Google Serper Wrapper
GooseAI
GPT4All
Graphsignal
Hazy Research
Helicone
Hugging Face
Jina
Llama.cpp
Milvus
Modal
NLPCloud
OpenAI
OpenSearch
Petals
PGVector
Pinecone
PromptLayer
Qdrant
Replicate
Runhouse
RWKV-4
SearxNG Search API
SerpAPI
StochasticAI
Unstructured
Weights & Biases
Weaviate
Wolfram Alpha Wrapper
Writer
Yeager.ai
Zilliz
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 21, 2023. | https://python.langchain.com/en/latest/ecosystem.html |
73454b1e50a0-0 | .md
.pdf
Deployments
Contents
Streamlit
Gradio (on Hugging Face)
Beam
Vercel
Digitalocean App Platform
SteamShip
Langchain-serve
BentoML
Deployments#
So you’ve made a really cool chain - now what? How do you deploy it and make it easily sharable with the world?
This section covers several options for that.
Note that these are meant as quick deployment options for prototypes and demos, and not for production systems.
If you are looking for help with deployment of a production system, please contact us directly.
What follows is a list of template GitHub repositories that are intended to be very easy to fork and modify to use your chain.
This is far from an exhaustive list of options, and we are EXTREMELY open to contributions here.
Streamlit#
This repo serves as a template for how to deploy a LangChain app with Streamlit.
It implements a chatbot interface.
It also contains instructions for how to deploy this app on the Streamlit platform.
Gradio (on Hugging Face)#
This repo serves as a template for how to deploy a LangChain app with Gradio.
It implements a chatbot interface, with a “Bring-Your-Own-Token” approach (nice for not racking up big bills).
It also contains instructions for how to deploy this app on the Hugging Face platform.
This is heavily influenced by James Weaver’s excellent examples.
Beam#
This repo serves as a template for how to deploy a LangChain app with Beam.
It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API.
Vercel#
A minimal example on how to run LangChain on Vercel using Flask.
Digitalocean App Platform#
A minimal example on how to deploy LangChain to DigitalOcean App Platform.
SteamShip# | https://python.langchain.com/en/latest/deployments.html |
73454b1e50a0-1 | A minimal example on how to deploy LangChain to DigitalOcean App Platform.
SteamShip#
This repository contains LangChain adapters for Steamship, enabling LangChain developers to rapidly deploy their apps on Steamship.
This includes: production-ready endpoints, horizontal scaling across dependencies, persistent storage of app state, multi-tenancy support, etc.
Langchain-serve#
This repository allows users to serve local chains and agents as RESTful, gRPC, or Websocket APIs thanks to Jina. Deploy your chains & agents with ease and enjoy independent scaling, serverless and autoscaling APIs, as well as a Streamlit playground on Jina AI Cloud.
BentoML#
This repository provides an example of how to deploy a LangChain application with BentoML. BentoML is a framework that enables the containerization of machine learning applications as standard OCI images. BentoML also allows for the automatic generation of OpenAPI and gRPC endpoints. With BentoML, you can integrate models from all popular ML frameworks and deploy them as microservices running on the most optimal hardware and scaling independently.
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 21, 2023. | https://python.langchain.com/en/latest/deployments.html |
8ff2f9c59045-0 | Source code for langchain.text_splitter
"""Functionality for splitting text."""
from __future__ import annotations
import copy
import logging
from abc import ABC, abstractmethod
from typing import (
AbstractSet,
Any,
Callable,
Collection,
Iterable,
List,
Literal,
Optional,
Sequence,
Union,
)
from langchain.docstore.document import Document
from langchain.schema import BaseDocumentTransformer
logger = logging.getLogger(__name__)
[docs]class TextSplitter(BaseDocumentTransformer, ABC):
"""Interface for splitting text into chunks."""
def __init__(
self,
chunk_size: int = 4000,
chunk_overlap: int = 200,
length_function: Callable[[str], int] = len,
):
"""Create a new TextSplitter."""
if chunk_overlap > chunk_size:
raise ValueError(
f"Got a larger chunk overlap ({chunk_overlap}) than chunk size "
f"({chunk_size}), should be smaller."
)
self._chunk_size = chunk_size
self._chunk_overlap = chunk_overlap
self._length_function = length_function
[docs] @abstractmethod
def split_text(self, text: str) -> List[str]:
"""Split text into multiple components."""
[docs] def create_documents(
self, texts: List[str], metadatas: Optional[List[dict]] = None
) -> List[Document]:
"""Create documents from a list of texts."""
_metadatas = metadatas or [{}] * len(texts)
documents = []
for i, text in enumerate(texts):
for chunk in self.split_text(text):
new_doc = Document( | https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
8ff2f9c59045-1 | for chunk in self.split_text(text):
new_doc = Document(
page_content=chunk, metadata=copy.deepcopy(_metadatas[i])
)
documents.append(new_doc)
return documents
[docs] def split_documents(self, documents: List[Document]) -> List[Document]:
"""Split documents."""
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]
return self.create_documents(texts, metadatas=metadatas)
def _join_docs(self, docs: List[str], separator: str) -> Optional[str]:
text = separator.join(docs)
text = text.strip()
if text == "":
return None
else:
return text
def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:
# We now want to combine these smaller pieces into medium size
# chunks to send to the LLM.
separator_len = self._length_function(separator)
docs = []
current_doc: List[str] = []
total = 0
for d in splits:
_len = self._length_function(d)
if (
total + _len + (separator_len if len(current_doc) > 0 else 0)
> self._chunk_size
):
if total > self._chunk_size:
logger.warning(
f"Created a chunk of size {total}, "
f"which is longer than the specified {self._chunk_size}"
)
if len(current_doc) > 0:
doc = self._join_docs(current_doc, separator)
if doc is not None:
docs.append(doc)
# Keep on popping if: | https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
8ff2f9c59045-2 | docs.append(doc)
# Keep on popping if:
# - we have a larger chunk than in the chunk overlap
# - or if we still have any chunks and the length is long
while total > self._chunk_overlap or (
total + _len + (separator_len if len(current_doc) > 0 else 0)
> self._chunk_size
and total > 0
):
total -= self._length_function(current_doc[0]) + (
separator_len if len(current_doc) > 1 else 0
)
current_doc = current_doc[1:]
current_doc.append(d)
total += _len + (separator_len if len(current_doc) > 1 else 0)
doc = self._join_docs(current_doc, separator)
if doc is not None:
docs.append(doc)
return docs
[docs] @classmethod
def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter:
"""Text splitter that uses HuggingFace tokenizer to count length."""
try:
from transformers import PreTrainedTokenizerBase
if not isinstance(tokenizer, PreTrainedTokenizerBase):
raise ValueError(
"Tokenizer received was not an instance of PreTrainedTokenizerBase"
)
def _huggingface_tokenizer_length(text: str) -> int:
return len(tokenizer.encode(text))
except ImportError:
raise ValueError(
"Could not import transformers python package. "
"Please install it with `pip install transformers`."
)
return cls(length_function=_huggingface_tokenizer_length, **kwargs)
[docs] @classmethod
def from_tiktoken_encoder(
cls, | https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
8ff2f9c59045-3 | [docs] @classmethod
def from_tiktoken_encoder(
cls,
encoding_name: str = "gpt2",
model_name: Optional[str] = None,
allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
disallowed_special: Union[Literal["all"], Collection[str]] = "all",
**kwargs: Any,
) -> TextSplitter:
"""Text splitter that uses tiktoken encoder to count length."""
try:
import tiktoken
except ImportError:
raise ValueError(
"Could not import tiktoken python package. "
"This is needed in order to calculate max_tokens_for_prompt. "
"Please install it with `pip install tiktoken`."
)
if model_name is not None:
enc = tiktoken.encoding_for_model(model_name)
else:
enc = tiktoken.get_encoding(encoding_name)
def _tiktoken_encoder(text: str, **kwargs: Any) -> int:
return len(
enc.encode(
text,
allowed_special=allowed_special,
disallowed_special=disallowed_special,
**kwargs,
)
)
return cls(length_function=_tiktoken_encoder, **kwargs)
[docs] def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Transform sequence of documents by splitting them."""
return self.split_documents(list(documents))
[docs] async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Asynchronously transform a sequence of documents by splitting them."""
raise NotImplementedError | https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
8ff2f9c59045-4 | """Asynchronously transform a sequence of documents by splitting them."""
raise NotImplementedError
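# ---------------------------------------------------------------------------
# Editor's usage sketch (not part of the library source): the classmethods
# above are normally invoked on one of the concrete subclasses defined below,
# so that chunk sizes are measured in tokens rather than characters, e.g.
# (illustrative parameter values; the optional tiktoken package is assumed):
#
#     splitter = CharacterTextSplitter.from_tiktoken_encoder(
#         chunk_size=500, chunk_overlap=50
#     )
# ---------------------------------------------------------------------------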
[docs]class CharacterTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at characters."""
def __init__(self, separator: str = "\n\n", **kwargs: Any):
"""Create a new TextSplitter."""
super().__init__(**kwargs)
self._separator = separator
[docs] def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
# First we naively split the large input into a bunch of smaller ones.
if self._separator:
splits = text.split(self._separator)
else:
splits = list(text)
return self._merge_splits(splits, self._separator)
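# Editor's usage sketch (not part of the library source); the text and
# parameter values are illustrative only:
#
#     splitter = CharacterTextSplitter(separator="\n\n", chunk_size=100, chunk_overlap=20)
#     chunks = splitter.split_text(long_text)        # -> List[str]
#     docs = splitter.create_documents([long_text])  # -> List[Document]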
[docs]class TokenTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at tokens."""
def __init__(
self,
encoding_name: str = "gpt2",
model_name: Optional[str] = None,
allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
disallowed_special: Union[Literal["all"], Collection[str]] = "all",
**kwargs: Any,
):
"""Create a new TextSplitter."""
super().__init__(**kwargs)
try:
import tiktoken
except ImportError:
raise ValueError(
"Could not import tiktoken python package. "
"This is needed in order to for TokenTextSplitter. "
"Please install it with `pip install tiktoken`."
)
if model_name is not None:
enc = tiktoken.encoding_for_model(model_name)
else: | https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
8ff2f9c59045-5 | enc = tiktoken.encoding_for_model(model_name)
else:
enc = tiktoken.get_encoding(encoding_name)
self._tokenizer = enc
self._allowed_special = allowed_special
self._disallowed_special = disallowed_special
[docs] def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
splits = []
input_ids = self._tokenizer.encode(
text,
allowed_special=self._allowed_special,
disallowed_special=self._disallowed_special,
)
start_idx = 0
cur_idx = min(start_idx + self._chunk_size, len(input_ids))
chunk_ids = input_ids[start_idx:cur_idx]
while start_idx < len(input_ids):
splits.append(self._tokenizer.decode(chunk_ids))
start_idx += self._chunk_size - self._chunk_overlap
cur_idx = min(start_idx + self._chunk_size, len(input_ids))
chunk_ids = input_ids[start_idx:cur_idx]
return splits
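# Editor's usage sketch (not part of the library source); requires the optional
# tiktoken dependency, and the parameter values are illustrative only:
#
#     splitter = TokenTextSplitter(encoding_name="gpt2", chunk_size=256, chunk_overlap=32)
#     chunks = splitter.split_text(long_text)  # chunk lengths are counted in tokens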
[docs]class RecursiveCharacterTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
"""
def __init__(self, separators: Optional[List[str]] = None, **kwargs: Any):
"""Create a new TextSplitter."""
super().__init__(**kwargs)
self._separators = separators or ["\n\n", "\n", " ", ""]
[docs] def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
final_chunks = []
# Get appropriate separator to use
separator = self._separators[-1] | https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
8ff2f9c59045-6 | # Get appropriate separator to use
separator = self._separators[-1]
for _s in self._separators:
if _s == "":
separator = _s
break
if _s in text:
separator = _s
break
# Now that we have the separator, split the text
if separator:
splits = text.split(separator)
else:
splits = list(text)
# Now go merging things, recursively splitting longer texts.
_good_splits = []
for s in splits:
if self._length_function(s) < self._chunk_size:
_good_splits.append(s)
else:
if _good_splits:
merged_text = self._merge_splits(_good_splits, separator)
final_chunks.extend(merged_text)
_good_splits = []
other_info = self.split_text(s)
final_chunks.extend(other_info)
if _good_splits:
merged_text = self._merge_splits(_good_splits, separator)
final_chunks.extend(merged_text)
return final_chunks
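# Editor's usage sketch (not part of the library source); parameter values are
# illustrative only:
#
#     splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
#     docs = splitter.create_documents([long_text])
#
# With the default separators the splitter tries "\n\n" first, then "\n",
# then " ", and finally falls back to individual characters, recursing into
# any piece that is still longer than chunk_size.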
[docs]class NLTKTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at sentences using NLTK."""
def __init__(self, separator: str = "\n\n", **kwargs: Any):
"""Initialize the NLTK splitter."""
super().__init__(**kwargs)
try:
from nltk.tokenize import sent_tokenize
self._tokenizer = sent_tokenize
except ImportError:
raise ImportError(
"NLTK is not installed, please install it with `pip install nltk`."
)
self._separator = separator
[docs] def split_text(self, text: str) -> List[str]: | https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
8ff2f9c59045-7 | [docs] def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
# First we naively split the large input into a bunch of smaller ones.
splits = self._tokenizer(text)
return self._merge_splits(splits, self._separator)
[docs]class SpacyTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at sentences using Spacy."""
def __init__(
self, separator: str = "\n\n", pipeline: str = "en_core_web_sm", **kwargs: Any
):
"""Initialize the spacy text splitter."""
super().__init__(**kwargs)
try:
import spacy
except ImportError:
raise ImportError(
"Spacy is not installed, please install it with `pip install spacy`."
)
self._tokenizer = spacy.load(pipeline)
self._separator = separator
[docs] def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
splits = (str(s) for s in self._tokenizer(text).sents)
return self._merge_splits(splits, self._separator)
[docs]class MarkdownTextSplitter(RecursiveCharacterTextSplitter):
"""Attempts to split the text along Markdown-formatted headings."""
def __init__(self, **kwargs: Any):
"""Initialize a MarkdownTextSplitter."""
separators = [
# First, try to split along Markdown headings (starting with level 2)
"\n## ",
"\n### ",
"\n#### ",
"\n##### ",
"\n###### ",
# Note the alternative syntax for headings (below) is not handled here | https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
8ff2f9c59045-8 | # Note the alternative syntax for headings (below) is not handled here
# Heading level 2
# ---------------
# End of code block
"```\n\n",
# Horizontal lines
"\n\n***\n\n",
"\n\n---\n\n",
"\n\n___\n\n",
# Note that this splitter doesn't handle horizontal lines defined
# by *three or more* of ***, ---, or ___
"\n\n",
"\n",
" ",
"",
]
super().__init__(separators=separators, **kwargs)
[docs]class LatexTextSplitter(RecursiveCharacterTextSplitter):
"""Attempts to split the text along Latex-formatted layout elements."""
def __init__(self, **kwargs: Any):
"""Initialize a LatexTextSplitter."""
separators = [
# First, try to split along Latex sections
"\n\\chapter{",
"\n\\section{",
"\n\\subsection{",
"\n\\subsubsection{",
# Now split by environments
"\n\\begin{enumerate}",
"\n\\begin{itemize}",
"\n\\begin{description}",
"\n\\begin{list}",
"\n\\begin{quote}",
"\n\\begin{quotation}",
"\n\\begin{verse}",
"\n\\begin{verbatim}",
## Now split by math environments
"\n\\begin{align}",
"$$",
"$",
# Now split by the normal type of lines
" ",
"",
] | https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
8ff2f9c59045-9 | # Now split by the normal type of lines
" ",
"",
]
super().__init__(separators=separators, **kwargs)
[docs]class PythonCodeTextSplitter(RecursiveCharacterTextSplitter):
"""Attempts to split the text along Python syntax."""
def __init__(self, **kwargs: Any):
"""Initialize a MarkdownTextSplitter."""
separators = [
# First, try to split along class definitions
"\nclass ",
"\ndef ",
"\n\tdef ",
# Now split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
super().__init__(separators=separators, **kwargs)
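# ---------------------------------------------------------------------------
# Editor's usage sketch (not part of the library source): MarkdownTextSplitter,
# LatexTextSplitter, and PythonCodeTextSplitter are RecursiveCharacterTextSplitter
# subclasses with pre-baked separator lists, so they are used the same way
# (illustrative parameter values only):
#
#     md_docs = MarkdownTextSplitter(chunk_size=500, chunk_overlap=0).create_documents([markdown_text])
#     py_docs = PythonCodeTextSplitter(chunk_size=500, chunk_overlap=0).create_documents([python_source])
# ---------------------------------------------------------------------------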
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 21, 2023. | https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
ae82dcd18e64-0 | Source code for langchain.vectorstores.annoy
"""Wrapper around Annoy vector database."""
from __future__ import annotations
import os
import pickle
import uuid
from configparser import ConfigParser
from pathlib import Path
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple
import numpy as np
from langchain.docstore.base import Docstore
from langchain.docstore.document import Document
from langchain.docstore.in_memory import InMemoryDocstore
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
INDEX_METRICS = frozenset(["angular", "euclidean", "manhattan", "hamming", "dot"])
DEFAULT_METRIC = "angular"
def dependable_annoy_import() -> Any:
"""Import annoy if available, otherwise raise error."""
try:
import annoy
except ImportError:
raise ValueError(
"Could not import annoy python package. "
"Please install it with `pip install --user annoy` "
)
return annoy
[docs]class Annoy(VectorStore):
"""Wrapper around Annoy vector database.
To use, you should have the ``annoy`` python package installed.
Example:
.. code-block:: python
from langchain import Annoy
db = Annoy(embedding_function, index, docstore, index_to_docstore_id)
"""
def __init__(
self,
embedding_function: Callable,
index: Any,
metric: str,
docstore: Docstore,
index_to_docstore_id: Dict[int, str],
):
"""Initialize with necessary components."""
self.embedding_function = embedding_function | https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html |
ae82dcd18e64-1 | ):
"""Initialize with necessary components."""
self.embedding_function = embedding_function
self.index = index
self.metric = metric
self.docstore = docstore
self.index_to_docstore_id = index_to_docstore_id
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
raise NotImplementedError(
"Annoy does not allow to add new data once the index is build."
)
[docs] def process_index_results(
self, idxs: List[int], dists: List[float]
) -> List[Tuple[Document, float]]:
"""Turns annoy results into a list of documents and scores.
Args:
idxs: List of indices of the documents in the index.
dists: List of distances of the documents in the index.
Returns:
List of Documents and scores.
"""
docs = []
for idx, dist in zip(idxs, dists):
_id = self.index_to_docstore_id[idx]
doc = self.docstore.search(_id)
if not isinstance(doc, Document):
raise ValueError(f"Could not find document for id {_id}, got {doc}")
docs.append((doc, dist))
return docs
[docs] def similarity_search_with_score_by_vector(
self, embedding: List[float], k: int = 4, search_k: int = -1
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
embedding: Embedding to look up documents similar to.
ae82dcd18e64-2 | Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
search_k: inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns:
List of Documents most similar to the query and score for each
"""
idxs, dists = self.index.get_nns_by_vector(
embedding, k, search_k=search_k, include_distances=True
)
return self.process_index_results(idxs, dists)
[docs] def similarity_search_with_score_by_index(
self, docstore_index: int, k: int = 4, search_k: int = -1
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
docstore_index: Index of the document in the docstore to find similar documents for.
k: Number of Documents to return. Defaults to 4.
search_k: inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns:
List of Documents most similar to the query and score for each
"""
idxs, dists = self.index.get_nns_by_item(
docstore_index, k, search_k=search_k, include_distances=True
)
return self.process_index_results(idxs, dists)
[docs] def similarity_search_with_score(
self, query: str, k: int = 4, search_k: int = -1
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4. | https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html |
ae82dcd18e64-3 | k: Number of Documents to return. Defaults to 4.
search_k: inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns:
List of Documents most similar to the query and score for each
"""
embedding = self.embedding_function(query)
docs = self.similarity_search_with_score_by_vector(embedding, k, search_k)
return docs
[docs] def similarity_search_by_vector(
self, embedding: List[float], k: int = 4, search_k: int = -1, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to embedding vector.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
search_k: inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns:
List of Documents most similar to the embedding.
"""
docs_and_scores = self.similarity_search_with_score_by_vector(
embedding, k, search_k
)
return [doc for doc, _ in docs_and_scores]
[docs] def similarity_search_by_index(
self, docstore_index: int, k: int = 4, search_k: int = -1, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to docstore_index.
Args:
docstore_index: Index of document in docstore
k: Number of Documents to return. Defaults to 4.
search_k: inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns:
List of Documents most similar to the embedding.
""" | https://python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html |
ae82dcd18e64-4 | Returns:
List of Documents most similar to the embedding.
"""
docs_and_scores = self.similarity_search_with_score_by_index(
docstore_index, k, search_k
)
return [doc for doc, _ in docs_and_scores]
[docs] def similarity_search(
self, query: str, k: int = 4, search_k: int = -1, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
search_k: inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns:
List of Documents most similar to the query.
"""
docs_and_scores = self.similarity_search_with_score(query, k, search_k)
return [doc for doc, _ in docs_and_scores]
[docs] def max_marginal_relevance_search_by_vector(
self, embedding: List[float], k: int = 4, fetch_k: int = 20, **kwargs: Any
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents selected by maximal marginal relevance.
"""
idxs = self.index.get_nns_by_vector(
embedding, fetch_k, search_k=-1, include_distances=False
)
embeddings = [self.index.get_item_vector(i) for i in idxs]
mmr_selected = maximal_marginal_relevance(
np.array([embedding], dtype=np.float32), embeddings, k=k
)
# ignore the -1's if not enough docs are returned/indexed
selected_indices = [idxs[i] for i in mmr_selected if i != -1]
docs = []
for i in selected_indices:
_id = self.index_to_docstore_id[i]
doc = self.docstore.search(_id)
if not isinstance(doc, Document):
raise ValueError(f"Could not find document for id {_id}, got {doc}")
docs.append(doc)
return docs
[docs] def max_marginal_relevance_search(
self, query: str, k: int = 4, fetch_k: int = 20, **kwargs: Any
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
Returns:
List of Documents selected by maximal marginal relevance.
"""
embedding = self.embedding_function(query)
docs = self.max_marginal_relevance_search_by_vector(embedding, k, fetch_k)
return docs
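# --- Example (editor's sketch, not part of the library source) ---
# max_marginal_relevance_search first fetches fetch_k nearest neighbours from
# the Annoy index, then keeps the k results that balance closeness to the query
# against diversity among each other. Assumes `db` is the Annoy store built in
# the earlier from_texts sketch.
diverse_docs = db.max_marginal_relevance_search(
    "libraries for vector search", k=2, fetch_k=10
)
for doc in diverse_docs:
    print(doc.page_content)
# --- end example ---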
@classmethod
def __from(
cls,
texts: List[str],
embeddings: List[List[float]],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
metric: str = DEFAULT_METRIC,
trees: int = 100,
n_jobs: int = -1,
**kwargs: Any,
) -> Annoy:
if metric not in INDEX_METRICS:
raise ValueError(
(
f"Unsupported distance metric: {metric}. "
f"Expected one of {list(INDEX_METRICS)}"
)
)
annoy = dependable_annoy_import()
if not embeddings:
raise ValueError("embeddings must be provided to build AnnoyIndex")
f = len(embeddings[0])
index = annoy.AnnoyIndex(f, metric=metric)
for i, emb in enumerate(embeddings):
index.add_item(i, emb)
index.build(trees, n_jobs=n_jobs)
documents = []
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
documents.append(Document(page_content=text, metadata=metadata))
index_to_id = {i: str(uuid.uuid4()) for i in range(len(documents))}
docstore = InMemoryDocstore(
{index_to_id[i]: doc for i, doc in enumerate(documents)}
)
return cls(embedding.embed_query, index, metric, docstore, index_to_id)
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
metric: str = DEFAULT_METRIC,
trees: int = 100,
n_jobs: int = -1,
**kwargs: Any,
) -> Annoy:
"""Construct Annoy wrapper from raw documents.
Args:
texts: List of documents to index.
embedding: Embedding function to use.
metadatas: List of metadata dictionaries to associate with documents.
metric: Metric to use for indexing. Defaults to "angular".
trees: Number of trees to use for indexing. Defaults to 100.
n_jobs: Number of jobs to use for indexing. Defaults to -1.
This is a user friendly interface that:
1. Embeds documents.
2. Creates an in memory docstore
3. Initializes the Annoy database
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
index = Annoy.from_texts(texts, embeddings)
"""
embeddings = embedding.embed_documents(texts)
return cls.__from(
texts, embeddings, embedding, metadatas, metric, trees, n_jobs, **kwargs
)
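# --- Example (editor's sketch, not part of the library source) ---
# from_texts with optional metadatas and index parameters. Annoy builds a
# static index, so metric and trees must be chosen up front; more trees
# generally mean better recall at the cost of a larger index and slower build.
# Assumes an OPENAI_API_KEY; file names in the metadata are made up.
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings

notes = ["first note", "second note"]
metadatas = [{"source": "a.txt"}, {"source": "b.txt"}]
db = Annoy.from_texts(
    notes,
    OpenAIEmbeddings(),
    metadatas=metadatas,
    metric="angular",
    trees=50,
)
print(db.similarity_search("note", k=1)[0].metadata)
# --- end example ---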
[docs] @classmethod
def from_embeddings(
cls,
text_embeddings: List[Tuple[str, List[float]]],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
metric: str = DEFAULT_METRIC,
trees: int = 100,
n_jobs: int = -1,
**kwargs: Any,
) -> Annoy:
"""Construct Annoy wrapper from embeddings.
Args:
text_embeddings: List of tuples of (text, embedding)
embedding: Embedding function to use.
metadatas: List of metadata dictionaries to associate with documents.
metric: Metric to use for indexing. Defaults to "angular".
trees: Number of trees to use for indexing. Defaults to 100.
n_jobs: Number of jobs to use for indexing. Defaults to -1
This is a user friendly interface that:
1. Creates an in memory docstore with provided embeddings
2. Initializes the Annoy database
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
db = Annoy.from_embeddings(text_embedding_pairs, embeddings)
"""
texts = [t[0] for t in text_embeddings]
embeddings = [t[1] for t in text_embeddings]
return cls.__from(
texts, embeddings, embedding, metadatas, metric, trees, n_jobs, **kwargs
)
[docs] def save_local(self, folder_path: str, prefault: bool = False) -> None:
"""Save Annoy index, docstore, and index_to_docstore_id to disk.
Args:
folder_path: folder path to save index, docstore,
and index_to_docstore_id to.
prefault: Whether to pre-load the index into memory.
"""
path = Path(folder_path)
os.makedirs(path, exist_ok=True)
# save index, index config, docstore and index_to_docstore_id
config_object = ConfigParser()
config_object["ANNOY"] = {
"f": self.index.f,
"metric": self.metric,
}
self.index.save(str(path / "index.annoy"), prefault=prefault)
with open(path / "index.pkl", "wb") as file:
pickle.dump((self.docstore, self.index_to_docstore_id, config_object), file)
[docs] @classmethod
def load_local(
cls,
folder_path: str,
embeddings: Embeddings,
) -> Annoy:
"""Load Annoy index, docstore, and index_to_docstore_id to disk.
Args:
folder_path: folder path to load index, docstore,
and index_to_docstore_id from.
embeddings: Embeddings to use when generating queries.
"""
path = Path(folder_path)
# load index separately since it is not picklable
annoy = dependable_annoy_import()
# load docstore and index_to_docstore_id
with open(path / "index.pkl", "rb") as file:
docstore, index_to_docstore_id, config_object = pickle.load(file)
f = int(config_object["ANNOY"]["f"])
metric = config_object["ANNOY"]["metric"]
index = annoy.AnnoyIndex(f, metric=metric)
index.load(str(path / "index.annoy"))
return cls(
embeddings.embed_query, index, metric, docstore, index_to_docstore_id
)
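# --- Example (editor's sketch, not part of the library source) ---
# Round-tripping a store with save_local / load_local. save_local writes the
# raw Annoy index (index.annoy) plus a pickled docstore (index.pkl), so only
# load folders you trust. Assumes `db` is an Annoy store built as in the
# earlier sketches and that OPENAI_API_KEY is set.
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings

db.save_local("annoy_index")
restored = Annoy.load_local("annoy_index", OpenAIEmbeddings())
print(restored.similarity_search("tree based index", k=1)[0].page_content)
# --- end example ---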
Source code for langchain.vectorstores.opensearch_vector_search
"""Wrapper around OpenSearch vector database."""
from __future__ import annotations
import uuid
from typing import Any, Dict, Iterable, List, Optional
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore
IMPORT_OPENSEARCH_PY_ERROR = (
"Could not import OpenSearch. Please install it with `pip install opensearch-py`."
)
SCRIPT_SCORING_SEARCH = "script_scoring"
PAINLESS_SCRIPTING_SEARCH = "painless_scripting"
MATCH_ALL_QUERY = {"match_all": {}} # type: Dict
def _import_opensearch() -> Any:
"""Import OpenSearch if available, otherwise raise error."""
try:
from opensearchpy import OpenSearch
except ImportError:
raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)
return OpenSearch
def _import_bulk() -> Any:
"""Import bulk if available, otherwise raise error."""
try:
from opensearchpy.helpers import bulk
except ImportError:
raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)
return bulk
def _get_opensearch_client(opensearch_url: str, **kwargs: Any) -> Any:
"""Get OpenSearch client from the opensearch_url, otherwise raise error."""
try:
opensearch = _import_opensearch()
client = opensearch(opensearch_url, **kwargs)
except ValueError as e:
raise ValueError(
f"OpenSearch client string provided is not in proper format. "
f"Got error: {e} "
)
return client
def _validate_embeddings_and_bulk_size(embeddings_length: int, bulk_size: int) -> None:
"""Validate Embeddings Length and Bulk Size."""
if embeddings_length == 0:
raise RuntimeError("Embeddings size is zero")
if bulk_size < embeddings_length:
raise RuntimeError(
f"The embeddings count, {embeddings_length} is more than the "
f"[bulk_size], {bulk_size}. Increase the value of [bulk_size]."
)
def _bulk_ingest_embeddings(
client: Any,
index_name: str,
embeddings: List[List[float]],
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
vector_field: str = "vector_field",
text_field: str = "text",
) -> List[str]:
"""Bulk Ingest Embeddings into given index."""
bulk = _import_bulk()
requests = []
ids = []
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
_id = str(uuid.uuid4())
request = {
"_op_type": "index",
"_index": index_name,
vector_field: embeddings[i],
text_field: text,
"metadata": metadata,
"_id": _id,
}
requests.append(request)
ids.append(_id)
bulk(client, requests)
client.indices.refresh(index=index_name)
return ids
def _default_scripting_text_mapping(
dim: int,
vector_field: str = "vector_field",
) -> Dict:
"""For Painless Scripting or Script Scoring,the default mapping to create index."""
return {
"mappings": {
"properties": {
vector_field: {"type": "knn_vector", "dimension": dim},
}
}
}
def _default_text_mapping(
dim: int,
engine: str = "nmslib",
space_type: str = "l2",
ef_search: int = 512,
ef_construction: int = 512,
m: int = 16,
vector_field: str = "vector_field",
) -> Dict:
"""For Approximate k-NN Search, this is the default mapping to create index."""
return {
"settings": {"index": {"knn": True, "knn.algo_param.ef_search": ef_search}},
"mappings": {
"properties": {
vector_field: {
"type": "knn_vector",
"dimension": dim,
"method": {
"name": "hnsw",
"space_type": space_type,
"engine": engine,
"parameters": {"ef_construction": ef_construction, "m": m},
},
}
}
},
}
def _default_approximate_search_query(
query_vector: List[float],
size: int = 4,
k: int = 4,
vector_field: str = "vector_field",
) -> Dict:
"""For Approximate k-NN Search, this is the default query."""
return {
"size": size, | https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html |
10591ca1fea3-3 | return {
"size": size,
"query": {"knn": {vector_field: {"vector": query_vector, "k": k}}},
}
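# --- Example (editor's note, not part of the library source) ---
# The helper above only assembles the OpenSearch k-NN request body; for instance
example_query = _default_approximate_search_query([0.1, 0.2, 0.3], size=2, k=2)
# example_query == {
#     "size": 2,
#     "query": {"knn": {"vector_field": {"vector": [0.1, 0.2, 0.3], "k": 2}}},
# }
# --- end note ---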
def _approximate_search_query_with_boolean_filter(
query_vector: List[float],
boolean_filter: Dict,
size: int = 4,
k: int = 4,
vector_field: str = "vector_field",
subquery_clause: str = "must",
) -> Dict:
"""For Approximate k-NN Search, with Boolean Filter."""
return {
"size": size,
"query": {
"bool": {
"filter": boolean_filter,
subquery_clause: [
{"knn": {vector_field: {"vector": query_vector, "k": k}}}
],
}
},
}
def _approximate_search_query_with_lucene_filter(
query_vector: List[float],
lucene_filter: Dict,
size: int = 4,
k: int = 4,
vector_field: str = "vector_field",
) -> Dict:
"""For Approximate k-NN Search, with Lucene Filter."""
search_query = _default_approximate_search_query(
query_vector, size, k, vector_field
)
search_query["query"]["knn"][vector_field]["filter"] = lucene_filter
return search_query
def _default_script_query(
query_vector: List[float],
space_type: str = "l2",
pre_filter: Dict = MATCH_ALL_QUERY,
vector_field: str = "vector_field",
) -> Dict:
"""For Script Scoring Search, this is the default query."""
return {
"query": {
"script_score": {
"query": pre_filter,
"script": {
"source": "knn_score",
"lang": "knn",
"params": {
"field": vector_field,
"query_value": query_vector,
"space_type": space_type,
},
},
}
}
}
def __get_painless_scripting_source(
space_type: str, query_vector: List[float], vector_field: str = "vector_field"
) -> str:
"""For Painless Scripting, it returns the script source based on space type."""
source_value = (
"(1.0 + "
+ space_type
+ "("
+ str(query_vector)
+ ", doc['"
+ vector_field
+ "']))"
)
if space_type == "cosineSimilarity":
return source_value
else:
return "1/" + source_value
def _default_painless_scripting_query(
query_vector: List[float],
space_type: str = "l2Squared",
pre_filter: Dict = MATCH_ALL_QUERY,
vector_field: str = "vector_field",
) -> Dict:
"""For Painless Scripting Search, this is the default query."""
source = __get_painless_scripting_source(space_type, query_vector)
return {
"query": {
"script_score": {
"query": pre_filter,
"script": {
"source": source,
"params": {
"field": vector_field,
"query_value": query_vector, | https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html |
10591ca1fea3-5 | "field": vector_field,
"query_value": query_vector,
},
},
}
}
}
def _get_kwargs_value(kwargs: Any, key: str, default_value: Any) -> Any:
"""Get the value of the key if present. Else get the default_value."""
if key in kwargs:
return kwargs.get(key)
return default_value
[docs]class OpenSearchVectorSearch(VectorStore):
"""Wrapper around OpenSearch as a vector database.
Example:
.. code-block:: python
from langchain import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
"""
def __init__(
self,
opensearch_url: str,
index_name: str,
embedding_function: Embeddings,
**kwargs: Any,
):
"""Initialize with necessary components."""
self.embedding_function = embedding_function
self.index_name = index_name
self.client = _get_opensearch_client(opensearch_url, **kwargs)
[docs] def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
bulk_size: int = 500,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
bulk_size: Bulk API request count; Default: 500
Returns:
List of ids from adding the texts into the vectorstore.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
"""
embeddings = self.embedding_function.embed_documents(list(texts))
_validate_embeddings_and_bulk_size(len(embeddings), bulk_size)
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
text_field = _get_kwargs_value(kwargs, "text_field", "text")
return _bulk_ingest_embeddings(
self.client,
self.index_name,
embeddings,
texts,
metadatas,
vector_field,
text_field,
)
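# --- Example (editor's sketch, not part of the library source) ---
# Adding texts to an existing OpenSearch index. Assumes a cluster reachable at
# localhost:9200 whose "embeddings" index already has a knn_vector mapping, and
# an OPENAI_API_KEY for the embedding function; the field names shown are the
# defaults and could be omitted.
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings

docsearch = OpenSearchVectorSearch(
    "http://localhost:9200", "embeddings", OpenAIEmbeddings()
)
new_ids = docsearch.add_texts(
    ["a freshly added document"],
    metadatas=[{"source": "notes.txt"}],
    vector_field="vector_field",
    text_field="text",
)
print(new_ids)
# --- end example ---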
[docs] def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
By default supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
metadata_field: Document field that metadata is stored in. Defaults to
"metadata".
Can be set to a special value "*" to include the entire document.
Optional Args for Approximate Search:
search_type: "approximate_search"; default: "approximate_search"
size: number of results the query actually returns; default: 4
boolean_filter: A Boolean filter consists of a Boolean query that
contains a k-NN query and a filter.
subquery_clause: Query clause on the knn vector field; default: "must"
lucene_filter: the Lucene algorithm decides whether to perform an exact
k-NN search with pre-filtering or an approximate search with modified
post-filtering.
Optional Args for Script Scoring Search:
search_type: "script_scoring"; default: "approximate_search"
space_type: "l2", "l1", "linf", "cosinesimil", "innerproduct",
"hammingbit"; default: "l2"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
Optional Args for Painless Scripting Search:
search_type: "painless_scripting"; default: "approximate_search"
space_type: "l2Squared", "l1Norm", "cosineSimilarity"; default: "l2Squared"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
"""
embedding = self.embedding_function.embed_query(query)
search_type = _get_kwargs_value(kwargs, "search_type", "approximate_search")
text_field = _get_kwargs_value(kwargs, "text_field", "text")
metadata_field = _get_kwargs_value(kwargs, "metadata_field", "metadata")
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
if search_type == "approximate_search":
size = _get_kwargs_value(kwargs, "size", 4)
boolean_filter = _get_kwargs_value(kwargs, "boolean_filter", {})
subquery_clause = _get_kwargs_value(kwargs, "subquery_clause", "must")
lucene_filter = _get_kwargs_value(kwargs, "lucene_filter", {})
if boolean_filter != {} and lucene_filter != {}:
raise ValueError(
"Both `boolean_filter` and `lucene_filter` are provided which "
"is invalid"
)
if boolean_filter != {}:
search_query = _approximate_search_query_with_boolean_filter(
embedding, boolean_filter, size, k, vector_field, subquery_clause
)
elif lucene_filter != {}:
search_query = _approximate_search_query_with_lucene_filter(
embedding, lucene_filter, size, k, vector_field
)
else:
search_query = _default_approximate_search_query(
embedding, size, k, vector_field
)
elif search_type == SCRIPT_SCORING_SEARCH:
space_type = _get_kwargs_value(kwargs, "space_type", "l2")
pre_filter = _get_kwargs_value(kwargs, "pre_filter", MATCH_ALL_QUERY)
search_query = _default_script_query(
embedding, space_type, pre_filter, vector_field
)
elif search_type == PAINLESS_SCRIPTING_SEARCH:
space_type = _get_kwargs_value(kwargs, "space_type", "l2Squared")
pre_filter = _get_kwargs_value(kwargs, "pre_filter", MATCH_ALL_QUERY)
search_query = _default_painless_scripting_query(
embedding, space_type, pre_filter, vector_field
)
else:
raise ValueError("Invalid `search_type` provided as an argument")
response = self.client.search(index=self.index_name, body=search_query)
hits = [hit["_source"] for hit in response["hits"]["hits"][:k]]
documents = [
Document(
page_content=hit[text_field],
metadata=hit
if metadata_field == "*" or metadata_field not in hit
else hit[metadata_field],
)
for hit in hits
]
return documents
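# --- Example (editor's sketch, not part of the library source) ---
# The same similarity_search call routes to different query builders depending
# on search_type. Assumes `docsearch` is the store from the sketch above; the
# boolean filter clause is purely illustrative and depends on how your index
# maps the metadata object.
# Approximate k-NN with a boolean filter (the index must support k-NN search):
docs = docsearch.similarity_search(
    "freshly added document",
    k=4,
    search_type="approximate_search",
    boolean_filter={"term": {"metadata.source": "notes.txt"}},
)
# Brute-force script scoring with cosine similarity and a match-all pre-filter:
docs = docsearch.similarity_search(
    "freshly added document",
    k=4,
    search_type="script_scoring",
    space_type="cosinesimil",
    pre_filter={"match_all": {}},
)
# --- end example ---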
[docs] @classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
bulk_size: int = 500,
**kwargs: Any,
) -> OpenSearchVectorSearch:
"""Construct OpenSearchVectorSearch wrapper from raw documents.
Example:
.. code-block:: python
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
OpenSearch by default supports Approximate Search powered by nmslib, faiss
and lucene engines recommended for large datasets. Also supports brute force
search through Script Scoring and Painless Scripting.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field". | https://python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html |
10591ca1fea3-10 | vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
Optional Keyword Args for Approximate Search:
engine: "nmslib", "faiss", "lucene"; default: "nmslib"
space_type: "l2", "l1", "cosinesimil", "linf", "innerproduct"; default: "l2"
ef_search: Size of the dynamic list used during k-NN searches. Higher values
lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation.
Higher values lead to more accurate graph but slower indexing speed;
default: 512
m: Number of bidirectional links created for each new element. Large impact
on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting:
is_appx_search: False
"""
opensearch_url = get_from_dict_or_env(
kwargs, "opensearch_url", "OPENSEARCH_URL"
)
# List of arguments that needs to be removed from kwargs
# before passing kwargs to get opensearch client
keys_list = [
"opensearch_url",
"index_name",
"is_appx_search",
"vector_field",
"text_field",
"engine",
"space_type",
"ef_search",
"ef_construction",
"m",
]
embeddings = embedding.embed_documents(texts)
_validate_embeddings_and_bulk_size(len(embeddings), bulk_size)
dim = len(embeddings[0])
# Get the index name from either from kwargs or ENV Variable
# before falling back to random generation
index_name = get_from_dict_or_env(
kwargs, "index_name", "OPENSEARCH_INDEX_NAME", default=uuid.uuid4().hex
)
is_appx_search = _get_kwargs_value(kwargs, "is_appx_search", True)
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
text_field = _get_kwargs_value(kwargs, "text_field", "text")
if is_appx_search:
engine = _get_kwargs_value(kwargs, "engine", "nmslib")
space_type = _get_kwargs_value(kwargs, "space_type", "l2")
ef_search = _get_kwargs_value(kwargs, "ef_search", 512)
ef_construction = _get_kwargs_value(kwargs, "ef_construction", 512)
m = _get_kwargs_value(kwargs, "m", 16)
mapping = _default_text_mapping(
dim, engine, space_type, ef_search, ef_construction, m, vector_field
)
else:
mapping = _default_scripting_text_mapping(dim)
[kwargs.pop(key, None) for key in keys_list]
client = _get_opensearch_client(opensearch_url, **kwargs)
client.indices.create(index=index_name, body=mapping)
_bulk_ingest_embeddings(
client, index_name, embeddings, texts, metadatas, vector_field, text_field
)
return cls(opensearch_url, index_name, embedding)
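# --- Example (editor's sketch, not part of the library source) ---
# Building a brand-new index with non-default HNSW settings. Assumes a local
# OpenSearch cluster and an OPENAI_API_KEY; opensearch_url and index_name could
# also come from the OPENSEARCH_URL / OPENSEARCH_INDEX_NAME environment
# variables instead of being passed explicitly.
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings

texts = ["foo", "bar", "baz"]
docsearch = OpenSearchVectorSearch.from_texts(
    texts,
    OpenAIEmbeddings(),
    opensearch_url="http://localhost:9200",
    index_name="langchain-demo",
    engine="faiss",
    space_type="innerproduct",
    ef_construction=256,
    m=48,
)
# --- end example ---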
Source code for langchain.vectorstores.faiss
"""Wrapper around FAISS vector database."""
from __future__ import annotations
import math
import pickle
import uuid
from pathlib import Path
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple
import numpy as np
from langchain.docstore.base import AddableMixin, Docstore
from langchain.docstore.document import Document
from langchain.docstore.in_memory import InMemoryDocstore
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
def dependable_faiss_import() -> Any:
"""Import faiss if available, otherwise raise error."""
try:
import faiss
except ImportError:
raise ValueError(
"Could not import faiss python package. "
"Please install it with `pip install faiss` "
"or `pip install faiss-cpu` (depending on Python version)."
)
return faiss
def _default_relevance_score_fn(score: float) -> float:
"""Return a similarity score on a scale [0, 1]."""
# The 'correct' relevance function
# may differ depending on a few things, including:
# - the distance / similarity metric used by the VectorStore
# - the scale of your embeddings (OpenAI's are unit normed. Many others are not!)
# - embedding dimensionality
# - etc.
# This function converts the euclidean norm of normalized embeddings
# (0 is most similar, sqrt(2) most dissimilar)
# to a similarity function (0 to 1)
return 1.0 - score / math.sqrt(2)
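# --- Example (editor's note, not part of the library source) ---
# On the scale this default assumes (0 = identical, sqrt(2) = most dissimilar):
#     _default_relevance_score_fn(0.0)           -> 1.0
#     _default_relevance_score_fn(1.0)           -> ~0.293
#     _default_relevance_score_fn(math.sqrt(2))  -> 0.0
# Scores outside [0, sqrt(2)] map outside [0, 1], which is why the comment
# above stresses that the right conversion depends on your embeddings and
# distance metric.
# --- end note ---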
[docs]class FAISS(VectorStore):
"""Wrapper around FAISS vector database.
To use, you should have the ``faiss`` python package installed.
Example:
.. code-block:: python
from langchain import FAISS
faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)
"""
def __init__(
self,
embedding_function: Callable,
index: Any,
docstore: Docstore,
index_to_docstore_id: Dict[int, str],
relevance_score_fn: Optional[
Callable[[float], float]
] = _default_relevance_score_fn,
):
"""Initialize with necessary components."""
self.embedding_function = embedding_function
self.index = index
self.docstore = docstore
self.index_to_docstore_id = index_to_docstore_id
self.relevance_score_fn = relevance_score_fn
def __add(
self,
texts: Iterable[str],
embeddings: Iterable[List[float]],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
if not isinstance(self.docstore, AddableMixin):
raise ValueError(
"If trying to add texts, the underlying docstore should support "
f"adding items, which {self.docstore} does not"
)
documents = []
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
documents.append(Document(page_content=text, metadata=metadata))
# Add to the index, the index_to_id mapping, and the docstore.
starting_len = len(self.index_to_docstore_id)