of causal relationships within a given context.
Classes¶
cpal.base.CPALChain
Causal program-aided language (CPAL) chain implementation.
cpal.base.CausalChain
Translate the causal narrative into a stack of operations.
cpal.base.InterventionChain
Set the hypothetical conditions for the causal model.
cpal.base.NarrativeChain
Decompose the narrative into its story elements.
cpal.base.QueryChain
Query the outcome table using SQL.
cpal.constants.Constant(value)
Enum for constants used in the CPAL.
cpal.models.CausalModel
Causal data.
cpal.models.EntityModel
Entity in the story.
cpal.models.EntitySettingModel
Entity initial conditions.
cpal.models.InterventionModel
Intervention data of the story aka initial conditions.
cpal.models.NarrativeModel
Narrative input as three story elements.
cpal.models.QueryModel
Query data of the story.
cpal.models.ResultModel
Result of the story query.
cpal.models.StoryModel
Story data.
cpal.models.SystemSettingModel
System initial conditions.
langchain_experimental.data_anonymizer¶
Data anonymizer contains both Anonymizers and Deanonymizers.
It uses the [Microsoft Presidio](https://microsoft.github.io/presidio/) library.
Anonymizers are used to replace Personally Identifiable Information (PII)
entity text with some other value
by applying a certain operator (e.g. replace, mask, redact, encrypt).
Deanonymizers are used to revert the anonymization operation
(e.g. to decrypt an encrypted text).
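The anonymize/deanonymize split described above can be illustrated with a minimal sketch. This is not the Presidio-backed API of the classes below, just the underlying idea: replace a PII match with a placeholder, keep the mapping, and use it to reverse the operation. The email regex and placeholder format are illustrative assumptions.

```python
import re

# Illustrative sketch only -- the real classes below wrap Microsoft Presidio.
# A PII pattern is replaced with a placeholder, and the mapping is kept so
# the operation can be reversed, mirroring the anonymizer/deanonymizer split.
def anonymize(text: str) -> tuple[str, dict[str, str]]:
    mapping: dict[str, str] = {}

    def repl(match: re.Match) -> str:
        placeholder = f"<EMAIL_{len(mapping)}>"
        mapping[placeholder] = match.group(0)
        return placeholder

    anonymized = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", repl, text)
    return anonymized, mapping

def deanonymize(text: str, mapping: dict[str, str]) -> str:
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

masked, mapping = anonymize("Contact alice@example.com for details.")
assert masked == "Contact <EMAIL_0> for details."
assert deanonymize(masked, mapping) == "Contact alice@example.com for details."
```

The matching strategies listed under Functions below exist because real LLM output may not reproduce placeholders exactly, so reversal needs exact, case-insensitive, or fuzzy matching.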
Classes¶
data_anonymizer.base.AnonymizerBase()
Base abstract class for anonymizers.
data_anonymizer.base.ReversibleAnonymizerBase()
Base abstract class for reversible anonymizers.
data_anonymizer.deanonymizer_mapping.DeanonymizerMapping(...)
Deanonymizer mapping.
data_anonymizer.presidio.PresidioAnonymizer([...])
Anonymizer using Microsoft Presidio.
data_anonymizer.presidio.PresidioAnonymizerBase([...])
Base Anonymizer using Microsoft Presidio.
data_anonymizer.presidio.PresidioReversibleAnonymizer([...])
Reversible Anonymizer using Microsoft Presidio.
Functions¶
data_anonymizer.deanonymizer_mapping.create_anonymizer_mapping(...)
Create or update the mapping used to anonymize and/or
data_anonymizer.deanonymizer_mapping.format_duplicated_operator(...)
Format the operator name with the count.
data_anonymizer.deanonymizer_matching_strategies.case_insensitive_matching_strategy(...)
Case insensitive matching strategy for deanonymization.
data_anonymizer.deanonymizer_matching_strategies.combined_exact_fuzzy_matching_strategy(...)
Combined exact and fuzzy matching strategy for deanonymization.
data_anonymizer.deanonymizer_matching_strategies.exact_matching_strategy(...)
Exact matching strategy for deanonymization.
data_anonymizer.deanonymizer_matching_strategies.fuzzy_matching_strategy(...)
Fuzzy matching strategy for deanonymization.
data_anonymizer.deanonymizer_matching_strategies.ngram_fuzzy_matching_strategy(...)
N-gram fuzzy matching strategy for deanonymization.
data_anonymizer.faker_presidio_mapping.get_pseudoanonymizer_mapping([seed])
Get a mapping of entities to pseudo anonymize them.
langchain_experimental.fallacy_removal¶
Fallacy Removal Chain runs a self-review of logical fallacies,
as identified in the paper
[Robust and Explainable Identification of Logical Fallacies in Natural
Language Arguments](https://arxiv.org/pdf/2212.07425.pdf).
It is modeled after Constitutional AI and follows the same format, but applies
logical fallacies as generalized rules and removes them from the output.
Classes¶
fallacy_removal.base.FallacyChain
Chain for applying logical fallacy evaluations.
fallacy_removal.models.LogicalFallacy
Logical fallacy.
langchain_experimental.generative_agents¶
Generative Agent primitives.
Classes¶
generative_agents.generative_agent.GenerativeAgent
Agent as a character with memory and innate characteristics.
generative_agents.memory.GenerativeAgentMemory
Memory for the generative agent.
langchain_experimental.graph_transformers¶
Graph Transformers transform Documents into Graph Documents.
Classes¶
graph_transformers.diffbot.DiffbotGraphTransformer([...])
Transform documents into graph documents using Diffbot NLP API.
graph_transformers.diffbot.NodesList()
List of nodes with associated properties.
graph_transformers.diffbot.SimplifiedSchema()
Simplified schema mapping.
graph_transformers.llm.LLMGraphTransformer(llm)
Transform documents into graph-based documents using an LLM.
Functions¶
graph_transformers.diffbot.format_property_key(s)
Formats a string to be used as a property key.
graph_transformers.llm.create_simple_model([...])
Create a simple model that limits node and/or relationship types.
graph_transformers.llm.map_to_base_node(node)
Map the SimpleNode to the base Node.
graph_transformers.llm.map_to_base_relationship(rel)
Map the SimpleRelationship to the base Relationship.
graph_transformers.llm.optional_enum_field([...])
Utility function to conditionally create a field with an enum constraint.
langchain_experimental.llm_bash¶
LLM Bash is a chain that uses an LLM to interpret a prompt and
execute Bash code.
Classes¶
llm_bash.base.LLMBashChain
Chain that interprets a prompt and executes bash operations.
llm_bash.bash.BashProcess([strip_newlines, ...])
Wrapper for starting subprocesses.
llm_bash.prompt.BashOutputParser
Parser for bash output.
langchain_experimental.llm_symbolic_math¶
Chain that interprets a prompt and executes python code to do math.
Heavily borrowed from llm_math, uses the [SymPy](https://www.sympy.org/) package.
Classes¶
llm_symbolic_math.base.LLMSymbolicMathChain
Chain that interprets a prompt and executes python code to do symbolic math.
langchain_experimental.llms¶
Experimental LLM classes provide
access to the large language model (LLM) APIs and services.
Classes¶
llms.anthropic_functions.AnthropicFunctions
[Deprecated] Chat model for interacting with Anthropic functions.
llms.anthropic_functions.TagParser()
Parser for the tool tags.
llms.jsonformer_decoder.JsonFormer
Jsonformer wrapped LLM using HuggingFace Pipeline API.
llms.llamaapi.ChatLlamaAPI
Chat model using the Llama API.
llms.lmformatenforcer_decoder.LMFormatEnforcer
LMFormatEnforcer wrapped LLM using HuggingFace Pipeline API.
llms.ollama_functions.OllamaFunctions
Function chat model that uses Ollama API.
llms.rellm_decoder.RELLM
RELLM wrapped LLM using HuggingFace Pipeline API.
Functions¶
llms.jsonformer_decoder.import_jsonformer()
Lazily import the jsonformer package.
llms.lmformatenforcer_decoder.import_lmformatenforcer()
Lazily import the lmformatenforcer package.
llms.rellm_decoder.import_rellm()
Lazily import the rellm package.
langchain_experimental.open_clip¶
OpenCLIP Embeddings model.
OpenCLIP is a multimodal model that can encode text and images into a shared space.
See this paper for more details: https://arxiv.org/abs/2103.00020
and [this repository](https://github.com/mlfoundations/open_clip) for details.
Classes¶
open_clip.open_clip.OpenCLIPEmbeddings
OpenCLIP Embeddings model.
langchain_experimental.pal_chain¶
PAL Chain implements Program-Aided Language Models.
See the paper: https://arxiv.org/pdf/2211.10435.pdf.
This chain is vulnerable to [arbitrary code execution](https://github.com/langchain-ai/langchain/issues/5872).
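The PAL idea can be sketched in a few lines: instead of asking the LLM for a final answer, ask it to emit a program whose execution produces the answer. The "generated" program below is hard-coded for illustration; a real chain would receive it from an LLM, which is exactly why PALChain carries the code-execution warning above.

```python
# Conceptual sketch of Program-Aided Language Models. The program string
# stands in for LLM output; executing untrusted LLM output like this is
# the arbitrary-code-execution risk flagged for PALChain.
generated_program = """
def solution():
    # "Olivia has $23. She bought five bagels for $3 each.
    #  How much money does she have left?"
    money_initial = 23
    bagels = 5
    bagel_cost = 3
    money_spent = bagels * bagel_cost
    return money_initial - money_spent
"""

namespace: dict = {}
exec(generated_program, namespace)  # never do this with untrusted input
answer = namespace["solution"]()
assert answer == 8
```

PALValidation, listed below, exists to constrain what generated code is allowed to do before it is executed.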
Classes¶
pal_chain.base.PALChain
Chain that implements Program-Aided Language Models (PAL).
pal_chain.base.PALValidation([...])
Validation for PAL generated code.
langchain_experimental.plan_and_execute¶
Plan-and-execute agents plan tasks with a language model (LLM) and
execute them with a separate agent.
Classes¶
plan_and_execute.agent_executor.PlanAndExecute
Plan and execute a chain of steps.
plan_and_execute.executors.base.BaseExecutor
Base executor.
plan_and_execute.executors.base.ChainExecutor
Chain executor.
plan_and_execute.planners.base.BasePlanner
Base planner.
plan_and_execute.planners.base.LLMPlanner
LLM planner.
plan_and_execute.planners.chat_planner.PlanningOutputParser
Planning output parser.
plan_and_execute.schema.BaseStepContainer
Base step container.
plan_and_execute.schema.ListStepContainer
Container for List of steps.
plan_and_execute.schema.Plan
Plan.
plan_and_execute.schema.PlanOutputParser
Plan output parser.
plan_and_execute.schema.Step
Step.
plan_and_execute.schema.StepResponse
Step response.
Functions¶
plan_and_execute.executors.agent_executor.load_agent_executor(...)
Load an agent executor.
plan_and_execute.planners.chat_planner.load_chat_planner(llm)
Load a chat planner.
langchain_experimental.prompt_injection_identifier¶
HuggingFace Injection Identifier is a tool that uses the
[HuggingFace Prompt Injection model](https://huggingface.co/deepset/deberta-v3-base-injection)
to detect prompt injection attacks.
Classes¶
prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier
Tool that uses HuggingFace Prompt Injection model to detect prompt injection attacks.
prompt_injection_identifier.hugging_face_identifier.PromptInjectionException([...])
Exception raised when prompt injection attack is detected.
langchain_experimental.recommenders¶
Amazon Personalize primitives.
[Amazon Personalize](https://docs.aws.amazon.com/personalize/latest/dg/what-is-personalize.html)
is a fully managed machine learning service that uses your data to generate
item recommendations for your users.
Classes¶
recommenders.amazon_personalize.AmazonPersonalize([...])
Amazon Personalize Runtime wrapper for executing real-time operations.
recommenders.amazon_personalize_chain.AmazonPersonalizeChain
Chain for retrieving recommendations from Amazon Personalize.
langchain_experimental.retrievers¶
Retriever class returns Documents given a text query.
It is more general than a vector store. A retriever does not need to be able to
store documents, only to return (or retrieve) them.
Classes¶
retrievers.vector_sql_database.VectorSQLDatabaseChainRetriever
Retriever that uses Vector SQL Database.
langchain_experimental.rl_chain¶
RL (Reinforcement Learning) Chain leverages Vowpal Wabbit (VW) models
for reinforcement learning with a context, with the goal of modifying
the prompt before the LLM call.
[Vowpal Wabbit](https://vowpalwabbit.org/) provides fast, efficient,
and flexible online machine learning techniques for reinforcement learning,
supervised learning, and more.
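The pick-best pattern behind this module can be sketched as a toy contextual bandit: keep a running score per (context, candidate) pair, usually exploit the best candidate, occasionally explore. This is a conceptual stand-in, not the VW-backed policy the real chain uses; class and method names here are invented for illustration.

```python
import random

# Toy stand-in for the pick-best idea. The real chain delegates selection
# and learning to a Vowpal Wabbit policy and uses the selected item to
# fill a slot in the prompt before the LLM call.
class ToyPickBest:
    def __init__(self, epsilon: float = 0.1, seed: int = 0) -> None:
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.scores: dict[tuple[str, str], list[float]] = {}

    def pick(self, context: str, candidates: list[str]) -> str:
        if self.rng.random() < self.epsilon:
            return self.rng.choice(candidates)  # explore

        def avg(c: str) -> float:
            history = self.scores.get((context, c), [])
            return sum(history) / len(history) if history else 0.0

        return max(candidates, key=avg)  # exploit

    def learn(self, context: str, candidate: str, score: float) -> None:
        self.scores.setdefault((context, candidate), []).append(score)

policy = ToyPickBest(epsilon=0.0)  # pure exploitation for a deterministic demo
policy.learn("user likes puns", "pun greeting", 1.0)
policy.learn("user likes puns", "formal greeting", 0.2)
picked = policy.pick("user likes puns", ["pun greeting", "formal greeting"])
assert picked == "pun greeting"
```

In the real classes, SelectionScorer (or AutoSelectionScorer) plays the role of the `learn` feedback signal.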
Classes¶
rl_chain.base.AutoSelectionScorer
Auto selection scorer.
rl_chain.base.Embedder(*args, **kwargs)
Abstract class to represent an embedder.
rl_chain.base.Event(inputs[, selected])
Abstract class to represent an event.
rl_chain.base.Policy(**kwargs)
Abstract class to represent a policy.
rl_chain.base.RLChain
Chain that leverages the Vowpal Wabbit (VW) model as a learned policy for reinforcement learning.
rl_chain.base.Selected()
Abstract class to represent the selected item.
rl_chain.base.SelectionScorer
Abstract class to grade the chosen selection or the response of the LLM.
rl_chain.base.VwPolicy(model_repo, vw_cmd, ...)
Vowpal Wabbit policy.
rl_chain.metrics.MetricsTrackerAverage(step)
Metrics Tracker Average.
rl_chain.metrics.MetricsTrackerRollingWindow(...)
Metrics Tracker Rolling Window.
rl_chain.model_repository.ModelRepository(folder)
Model Repository.
rl_chain.pick_best_chain.PickBest
Chain that leverages the Vowpal Wabbit (VW) model for reinforcement learning with a context, with the goal of modifying the prompt before the LLM call.
rl_chain.pick_best_chain.PickBestEvent(...)
Event class for PickBest chain.
rl_chain.pick_best_chain.PickBestFeatureEmbedder(...)
Embed the BasedOn and ToSelectFrom inputs into a format that can be used by the learning policy.
rl_chain.pick_best_chain.PickBestRandomPolicy(...)
Random policy for PickBest chain.
rl_chain.pick_best_chain.PickBestSelected([...])
Selected class for PickBest chain.
rl_chain.vw_logger.VwLogger(path)
Vowpal Wabbit custom logger.
Functions¶
rl_chain.base.BasedOn(anything)
Wrap a value to indicate that it should be based on.
rl_chain.base.Embed(anything[, keep])
Wrap a value to indicate that it should be embedded.
rl_chain.base.EmbedAndKeep(anything)
Wrap a value to indicate that it should be embedded and kept.
rl_chain.base.ToSelectFrom(anything)
Wrap a value to indicate that it should be selected from.
rl_chain.base.embed(to_embed, model[, namespace])
Embed the actions or context using the SentenceTransformer model (or a model that has an encode function).
rl_chain.base.embed_dict_type(item, model)
Embed a dictionary item.
rl_chain.base.embed_list_type(item, model[, ...])
Embed a list item.
rl_chain.base.embed_string_type(item, model)
Embed a string or an _Embed object.
rl_chain.base.get_based_on_and_to_select_from(inputs)
Get the BasedOn and ToSelectFrom from the inputs.
rl_chain.base.is_stringtype_instance(item)
Check if an item is a string.
rl_chain.base.parse_lines(parser, input_str)
Parse the input string into a list of examples.
rl_chain.base.prepare_inputs_for_autoembed(inputs)
Prepare the inputs for auto embedding.
rl_chain.base.stringify_embedding(embedding)
Convert an embedding to a string.
langchain_experimental.smart_llm¶
The SmartGPT chain applies self-critique using the SmartGPT workflow.
See details at https://youtu.be/wVzuvf9D9BU
The workflow performs these 3 steps:
1. Ideate: Pass the user prompt to an Ideation LLM n_ideas times;
each result is an "idea".
2. Critique: Pass the ideas to a Critique LLM, which looks for flaws in the ideas
& picks the best one.
3. Resolve: Pass the critique to a Resolver LLM, which improves upon the best idea
& outputs only the (improved version of) the best output.
In total, the SmartGPT workflow will use n_ideas + 2 LLM calls.
Note that SmartLLMChain will only improve results (compared to a basic LLMChain),
when the underlying models have the capability for reflection, which smaller models
often don’t.
Finally, a SmartLLMChain assumes that each underlying LLM outputs exactly 1 result.
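The ideate/critique/resolve workflow, and its n_ideas + 2 call count, can be sketched with plain callables standing in for the three LLMs. The function and stub names are invented for illustration; this is not the SmartLLMChain API.

```python
# Sketch of the SmartGPT workflow with stub "LLMs" (plain callables).
# n_ideas ideation calls + 1 critique call + 1 resolve call = n_ideas + 2.
def smart_llm(prompt, ideation_llm, critique_llm, resolver_llm, n_ideas=3):
    ideas = [ideation_llm(prompt) for _ in range(n_ideas)]  # n_ideas calls
    critique = critique_llm(prompt, ideas)                  # +1 call
    return resolver_llm(prompt, ideas, critique)            # +1 call

calls = {"count": 0}

def ideate(prompt):
    calls["count"] += 1
    return f"idea {calls['count']}"

def critique(prompt, ideas):
    calls["count"] += 1
    return f"best: {ideas[0]}"

def resolve(prompt, ideas, critique_text):
    calls["count"] += 1
    return f"improved {critique_text}"

result = smart_llm("What is 2+2?", ideate, critique, resolve, n_ideas=3)
assert calls["count"] == 3 + 2  # n_ideas + 2 LLM calls
assert result == "improved best: idea 1"
```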
Classes¶
smart_llm.base.SmartLLMChain
Chain for applying self-critique using the SmartGPT workflow.
langchain_experimental.sql¶
SQL Chain interacts with SQL Database.
Classes¶
sql.base.SQLDatabaseChain
Chain for interacting with SQL Database.
sql.base.SQLDatabaseSequentialChain
Chain for querying SQL database that is a sequential chain.
sql.vector_sql.VectorSQLDatabaseChain
Chain for interacting with Vector SQL Database.
sql.vector_sql.VectorSQLOutputParser
Output Parser for Vector SQL.
sql.vector_sql.VectorSQLRetrieveAllOutputParser
Parser based on VectorSQLOutputParser.
Functions¶
sql.vector_sql.get_result_from_sqldb(db, cmd)
Get result from SQL Database.
langchain_experimental.tabular_synthetic_data¶
Generate tabular synthetic data using an LLM and a few-shot template.
Classes¶
tabular_synthetic_data.base.SyntheticDataGenerator
Generate synthetic data using the given LLM and few-shot template.
Functions¶
tabular_synthetic_data.openai.create_openai_data_generator(...)
Create an instance of SyntheticDataGenerator tailored for OpenAI models.
langchain_experimental.text_splitter¶
Experimental text splitter based on semantic similarity.
Classes¶
text_splitter.SemanticChunker(embeddings[, ...])
Split the text based on semantic similarity.
Functions¶
text_splitter.calculate_cosine_distances(...)
Calculate cosine distances between sentences.
text_splitter.combine_sentences(sentences[, ...])
Combine sentences based on buffer size.
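The semantic-chunking idea behind SemanticChunker and calculate_cosine_distances can be sketched as follows: embed consecutive sentences, compute cosine distances between neighbors, and start a new chunk wherever the distance crosses a threshold. The toy "embeddings" and threshold below are hand-written assumptions; the real splitter uses an embedding model.

```python
import math

# Rough sketch of semantic chunking: split where neighboring sentences
# are semantically far apart (large cosine distance).
def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def split_on_distance(sentences, embeddings, threshold=0.5):
    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        if cosine_distance(embeddings[i - 1], embeddings[i]) > threshold:
            chunks.append(" ".join(current))
            current = []
        current.append(sentences[i])
    chunks.append(" ".join(current))
    return chunks

sentences = ["Cats purr.", "Cats nap.", "GDP rose."]
embeddings = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]  # toy vectors
chunks = split_on_distance(sentences, embeddings)
assert chunks == ["Cats purr. Cats nap.", "GDP rose."]
```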
langchain_experimental.tools¶
Experimental Python REPL tools.
Classes¶
tools.python.tool.PythonAstREPLTool
Tool for running python code in a REPL.
tools.python.tool.PythonInputs
Python inputs.
tools.python.tool.PythonREPLTool
Tool for running python code in a REPL.
Functions¶
tools.python.tool.sanitize_input(query)
Sanitize input to the python REPL.
langchain_experimental.tot¶
Implementation of a Tree of Thought (ToT) chain based on the paper
[Large Language Model Guided Tree-of-Thought](https://arxiv.org/pdf/2305.08291.pdf).
The Tree of Thought (ToT) chain uses a tree structure to explore the space of
possible solutions to a problem.
Classes¶
tot.base.ToTChain
Chain implementing the Tree of Thought (ToT).
tot.checker.ToTChecker
Tree of Thought (ToT) checker.
tot.controller.ToTController([c])
Tree of Thought (ToT) controller.
tot.memory.ToTDFSMemory([stack])
Memory for the Tree of Thought (ToT) chain.
tot.prompts.CheckerOutputParser
Parse and check the output of the language model.
tot.prompts.JSONListOutputParser
Parse the output of a PROPOSE_PROMPT response.
tot.thought.Thought
A thought in the ToT.
tot.thought.ThoughtValidity(value)
Enum for the validity of a thought.
tot.thought_generation.BaseThoughtGenerationStrategy
Base class for a thought generation strategy.
tot.thought_generation.ProposePromptStrategy
Strategy that sequentially uses a "propose prompt".
tot.thought_generation.SampleCoTStrategy
Sample strategy from a Chain-of-Thought (CoT) prompt.
Functions¶
tot.prompts.get_cot_prompt()
Get the prompt for the Chain of Thought (CoT) chain.
tot.prompts.get_propose_prompt()
Get the prompt for the PROPOSE_PROMPT chain.
langchain_experimental.utilities¶
Utility that simulates a standalone Python REPL.
Classes¶
utilities.python.PythonREPL
Simulates a standalone Python REPL.
langchain_experimental.video_captioning¶
Classes¶
video_captioning.base.VideoCaptioningChain
Video Captioning Chain.
video_captioning.models.AudioModel(...)
video_captioning.models.BaseModel(...)
video_captioning.models.CaptionModel(...)
video_captioning.models.VideoModel(...)
video_captioning.services.audio_service.AudioProcessor(api_key)
video_captioning.services.caption_service.CaptionProcessor(llm)
video_captioning.services.combine_service.CombineProcessor(llm)
video_captioning.services.image_service.ImageProcessor([...])
video_captioning.services.srt_service.SRTProcessor()
langchain_groq 0.1.2¶
langchain_groq.chat_models¶
Groq Chat wrapper.
Classes¶
chat_models.ChatGroq
Groq Chat large language models API.
langchain_openai 0.1.3¶
langchain_openai.chat_models¶
Classes¶
chat_models.azure.AzureChatOpenAI
Azure OpenAI Chat Completion API.
chat_models.base.ChatOpenAI
OpenAI Chat large language models API.
langchain_openai.embeddings¶
Classes¶
embeddings.azure.AzureOpenAIEmbeddings
Azure OpenAI Embeddings API.
embeddings.base.OpenAIEmbeddings
OpenAI embedding models.
langchain_openai.llms¶
Classes¶
llms.azure.AzureOpenAI
Azure-specific OpenAI large language models.
llms.base.BaseOpenAI
Base OpenAI large language model class.
llms.base.OpenAI
OpenAI large language models.
langchain_ibm 0.1.3¶
langchain_fireworks 0.1.2¶
langchain_fireworks.chat_models¶
Fireworks chat wrapper.
Classes¶
chat_models.ChatFireworks
Fireworks Chat large language models API.
langchain_fireworks.embeddings¶
Classes¶
embeddings.FireworksEmbeddings
FireworksEmbeddings embedding model.
langchain_fireworks.llms¶
Wrapper around Fireworks AI’s Completion API.
Classes¶
llms.Fireworks
LLM models from Fireworks.
langchain_core 0.1.42¶
langchain_core.agents¶
Agent is a class that uses an LLM to choose a sequence of actions to take.
In Chains, a sequence of actions is hardcoded. In Agents,
a language model is used as a reasoning engine to determine which actions
to take and in which order.
Agents select and use Tools and Toolkits for actions.
Class hierarchy:
BaseSingleActionAgent --> LLMSingleActionAgent
                          OpenAIFunctionsAgent
                          XMLAgent
Agent --> <name>Agent  # Examples: ZeroShotAgent, ChatAgent
BaseMultiActionAgent --> OpenAIMultiFunctionsAgent
Main helpers:
AgentType, AgentExecutor, AgentOutputParser, AgentExecutorIterator,
AgentAction, AgentFinish, AgentStep
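The action/finish distinction above drives the agent loop: the reasoning step either returns an action (run a tool, observe the result) or a finish (stop and return). The dataclasses and rule-based "planner" below are simplified stand-ins for AgentAction/AgentFinish and an LLM, invented to show the loop shape only.

```python
from dataclasses import dataclass

# Minimal illustration of the agent loop: keep choosing tool actions
# until the planner returns a finish value.
@dataclass
class Action:
    tool: str
    tool_input: str

@dataclass
class Finish:
    return_values: dict

def plan(observations: list[str]):
    # Stand-in for the LLM reasoning step.
    if not observations:
        return Action(tool="search", tool_input="capital of France")
    return Finish(return_values={"output": observations[-1]})

tools = {"search": lambda query: "Paris"}  # toy tool

observations: list[str] = []
while True:
    step = plan(observations)
    if isinstance(step, Finish):
        result = step.return_values["output"]
        break
    observations.append(tools[step.tool](step.tool_input))

assert result == "Paris"
```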
Classes¶
agents.AgentAction
A full description of an action for an ActionAgent to execute.
agents.AgentActionMessageLog
Override init to support instantiation by position for backward compat.
agents.AgentFinish
The final return value of an ActionAgent.
agents.AgentStep
The result of running an AgentAction.
langchain_core.beta¶
Some beta features that are not yet ready for production.
Classes¶
beta.runnables.context.Context()
Context for a runnable.
beta.runnables.context.ContextGet
[Beta] Get a context value.
beta.runnables.context.ContextSet
[Beta] Set a context value.
beta.runnables.context.PrefixContext([prefix])
Context for a runnable with a prefix.
Functions¶
beta.runnables.context.aconfig_with_context(...)
Asynchronously patch a runnable config with context getters and setters.
beta.runnables.context.config_with_context(...)
Patch a runnable config with context getters and setters.
langchain_core.caches¶
Warning
Beta Feature!
Cache provides an optional caching layer for LLMs.
Cache is useful for two reasons:
It can save you money by reducing the number of API calls you make to the LLM
provider if you’re often requesting the same completion multiple times.
It can speed up your application by reducing the number of API calls you make
to the LLM provider.
Cache directly competes with Memory. See documentation for Pros and Cons.
Class hierarchy:
BaseCache --> <name>Cache # Examples: InMemoryCache, RedisCache, GPTCache
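A minimal in-memory cache in the shape of this hierarchy might look as follows. The class name is invented for illustration; the key idea, sketched here as an assumption about the interface, is that lookups are keyed by the prompt plus a string identifying the LLM and its parameters, so the same prompt sent to differently configured models caches separately.

```python
# Toy cache: same prompt + same LLM configuration -> reuse the completion.
class ToyInMemoryCache:
    def __init__(self) -> None:
        self._store: dict[tuple[str, str], str] = {}

    def lookup(self, prompt: str, llm_string: str):
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, value: str) -> None:
        self._store[(prompt, llm_string)] = value

cache = ToyInMemoryCache()
assert cache.lookup("Hi", "gpt-x temp=0") is None      # miss: call the LLM
cache.update("Hi", "gpt-x temp=0", "Hello!")
assert cache.lookup("Hi", "gpt-x temp=0") == "Hello!"  # hit: skip the call
assert cache.lookup("Hi", "gpt-x temp=1") is None      # different params miss
```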
Classes¶
caches.BaseCache()
This interface provides a caching layer for LLMs and Chat models.
langchain_core.callbacks¶
Callback handlers allow listening to events in LangChain.
Class hierarchy:
BaseCallbackHandler --> <name>CallbackHandler # Example: AimCallbackHandler
Classes¶
callbacks.base.AsyncCallbackHandler()
Async callback handler that handles callbacks from LangChain.
callbacks.base.BaseCallbackHandler()
Base callback handler that handles callbacks from LangChain.
callbacks.base.BaseCallbackManager(handlers)
Base callback manager that handles callbacks from LangChain.
callbacks.base.CallbackManagerMixin()
Mixin for callback manager.
callbacks.base.ChainManagerMixin()
Mixin for chain callbacks.
callbacks.base.LLMManagerMixin()
Mixin for LLM callbacks.
callbacks.base.RetrieverManagerMixin()
Mixin for Retriever callbacks.
callbacks.base.RunManagerMixin()
Mixin for run manager.
callbacks.base.ToolManagerMixin()
Mixin for tool callbacks.
callbacks.manager.AsyncCallbackManager(handlers)
Async callback manager that handles callbacks from LangChain.
callbacks.manager.AsyncCallbackManagerForChainGroup(...)
Async callback manager for the chain group.
callbacks.manager.AsyncCallbackManagerForChainRun(*, ...)
Async callback manager for chain run.
callbacks.manager.AsyncCallbackManagerForLLMRun(*, ...)
Async callback manager for LLM run.
callbacks.manager.AsyncCallbackManagerForRetrieverRun(*, ...)
Async callback manager for retriever run.
callbacks.manager.AsyncCallbackManagerForToolRun(*, ...)
Async callback manager for tool run.
callbacks.manager.AsyncParentRunManager(*, ...)
Async Parent Run Manager.
callbacks.manager.AsyncRunManager(*, run_id, ...)
Async Run Manager.
callbacks.manager.BaseRunManager(*, run_id, ...)
Base class for run manager (a bound callback manager).
callbacks.manager.CallbackManager(handlers)
Callback manager that handles callbacks from LangChain.
callbacks.manager.CallbackManagerForChainGroup(...)
Callback manager for the chain group.
callbacks.manager.CallbackManagerForChainRun(*, ...)
Callback manager for chain run.
callbacks.manager.CallbackManagerForLLMRun(*, ...)
Callback manager for LLM run.
callbacks.manager.CallbackManagerForRetrieverRun(*, ...)
Callback manager for retriever run.
callbacks.manager.CallbackManagerForToolRun(*, ...)
Callback manager for tool run.
callbacks.manager.ParentRunManager(*, ...[, ...])
Sync Parent Run Manager.
callbacks.manager.RunManager(*, run_id, ...)
Sync Run Manager.
callbacks.stdout.StdOutCallbackHandler([color])
Callback Handler that prints to std out.
callbacks.streaming_stdout.StreamingStdOutCallbackHandler()
Callback handler for streaming.
Functions¶
callbacks.manager.ahandle_event(handlers, ...)
Generic event handler for AsyncCallbackManager.
callbacks.manager.atrace_as_chain_group(...)
Get an async callback manager for a chain group in a context manager.
callbacks.manager.handle_event(handlers, ...)
Generic event handler for CallbackManager.
callbacks.manager.shielded(func)
Make an awaitable method always shielded from cancellation.
callbacks.manager.trace_as_chain_group(...)
Get a callback manager for a chain group in a context manager.
langchain_core.chat_history¶
Chat message history stores a history of the message interactions in a chat.
Class hierarchy:
BaseChatMessageHistory --> <name>ChatMessageHistory # Examples: FileChatMessageHistory, PostgresChatMessageHistory
Main helpers:
AIMessage, HumanMessage, BaseMessage
Classes¶
chat_history.BaseChatMessageHistory()
Abstract base class for storing chat message history.
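A chat message history in the spirit of this interface is essentially an append-only log of role-tagged messages. The class below is a simplified stand-in with invented names, storing (role, content) tuples instead of the AIMessage/HumanMessage objects noted above; real implementations persist to a file, database, etc.

```python
# Toy chat message history: append-only log with helpers for the two
# common roles and a clear() to reset the conversation.
class ToyChatMessageHistory:
    def __init__(self) -> None:
        self.messages: list[tuple[str, str]] = []

    def add_user_message(self, content: str) -> None:
        self.messages.append(("human", content))

    def add_ai_message(self, content: str) -> None:
        self.messages.append(("ai", content))

    def clear(self) -> None:
        self.messages = []

history = ToyChatMessageHistory()
history.add_user_message("Hi there!")
history.add_ai_message("Hello! How can I help?")
assert history.messages == [("human", "Hi there!"), ("ai", "Hello! How can I help?")]
history.clear()
assert history.messages == []
```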
langchain_core.chat_sessions¶
Chat Sessions are a collection of messages and function calls.
Classes¶
chat_sessions.ChatSession
Chat Session represents a single conversation, channel, or other group of messages.
langchain_core.document_loaders¶
Classes¶
document_loaders.base.BaseBlobParser()
Abstract interface for blob parsers.
document_loaders.base.BaseLoader()
Interface for Document Loader.
document_loaders.blob_loaders.Blob
Blob represents raw data by either reference or value.
document_loaders.blob_loaders.BlobLoader()
Abstract interface for blob loaders implementation.
langchain_core.documents¶
Document module is a collection of classes that handle documents
and their transformations.
Classes¶
documents.base.Document
Class for storing a piece of text and associated metadata.
documents.compressor.BaseDocumentCompressor
Base class for document compressors.
documents.transformers.BaseDocumentTransformer()
Abstract base class for document transformation systems.
langchain_core.embeddings¶
Classes¶
embeddings.embeddings.Embeddings()
Interface for embedding models.
embeddings.fake.DeterministicFakeEmbedding
Fake embedding model that always returns the same embedding vector for the same text.
embeddings.fake.FakeEmbeddings
Fake embedding model.
langchain_core.example_selectors¶
Example selectors implement logic for selecting examples to include
in prompts.
This allows us to select examples that are most relevant to the input.
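The length-based variant listed below can be sketched simply: keep adding few-shot examples until a length budget is exhausted, so longer inputs leave room for fewer examples. The word-count length function and budget here are simplified assumptions, not the LengthBasedExampleSelector API.

```python
# Toy length-based example selection: fit examples into a word budget
# shared with the user input.
def select_examples(examples: list[str], user_input: str, max_words: int = 10) -> list[str]:
    remaining = max_words - len(user_input.split())
    selected = []
    for example in examples:
        cost = len(example.split())
        if cost > remaining:
            break
        selected.append(example)
        remaining -= cost
    return selected

examples = ["2+2 is 4", "3+3 is 6", "4+4 is 8"]
assert select_examples(examples, "what is 5+5") == ["2+2 is 4", "3+3 is 6"]
assert select_examples(examples, "please tell me what five plus five is") == []
```

The semantic-similarity selectors replace the length criterion with embedding distance between the input and each candidate example.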
Classes¶
example_selectors.base.BaseExampleSelector()
Interface for selecting examples to include in prompts.
example_selectors.length_based.LengthBasedExampleSelector
Select examples based on length.
example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector
ExampleSelector that selects examples based on Max Marginal Relevance.
example_selectors.semantic_similarity.SemanticSimilarityExampleSelector
Example selector that selects examples based on SemanticSimilarity.
Functions¶
example_selectors.semantic_similarity.sorted_values(values)
Return a list of values in dict sorted by key.
langchain_core.exceptions¶
Custom exceptions for LangChain.
Classes¶
exceptions.LangChainException
General LangChain exception.
exceptions.OutputParserException(error[, ...])
Exception that output parsers should raise to signify a parsing error.
exceptions.TracerException
Base class for exceptions in tracers module.
langchain_core.globals¶
Global values and configuration that apply to all of LangChain.
Functions¶
globals.get_debug()
Get the value of the debug global setting.
globals.get_llm_cache()
Get the value of the llm_cache global setting.
globals.get_verbose()
Get the value of the verbose global setting.
globals.set_debug(value)
Set a new value for the debug global setting.
globals.set_llm_cache(value)
Set a new LLM cache, overwriting the previous value, if any.
globals.set_verbose(value)
Set a new value for the verbose global setting.
langchain_core.language_models¶
Language Model is a type of model that can generate text or complete
text prompts.
LangChain has two main classes to work with language models:
- LLM classes provide access to the large language model (LLM) APIs and services.
- Chat Models are a variation on language models.
Class hierarchy:
BaseLanguageModel --> BaseLLM --> LLM --> <name>  # Examples: AI21, HuggingFaceHub, OpenAI
                  --> BaseChatModel --> <name>    # Examples: ChatOpenAI, ChatGooglePalm
Main helpers:
LLMResult, PromptValue,
CallbackManagerForLLMRun, AsyncCallbackManagerForLLMRun,
CallbackManager, AsyncCallbackManager,
AIMessage, BaseMessage, HumanMessage
Classes¶
language_models.base.BaseLanguageModel
Abstract base class for interfacing with language models.
language_models.chat_models.BaseChatModel
Base class for Chat models.
language_models.chat_models.SimpleChatModel
A simplified implementation for a chat model to inherit from.
language_models.fake.FakeListLLM
Fake LLM for testing purposes.
language_models.fake.FakeStreamingListLLM
Fake streaming list LLM for testing purposes.
language_models.fake_chat_models.FakeChatModel
Fake Chat Model wrapper for testing purposes.
language_models.fake_chat_models.FakeListChatModel
Fake ChatModel for testing purposes.
language_models.fake_chat_models.FakeMessagesListChatModel
Fake ChatModel for testing purposes.
language_models.fake_chat_models.GenericFakeChatModel
A generic fake chat model that can be used to test the chat model interface.
language_models.fake_chat_models.ParrotFakeChatModel
A generic fake chat model that can be used to test the chat model interface.
language_models.llms.BaseLLM
Base LLM abstract interface.
language_models.llms.LLM
This class exposes a simple interface for implementing a custom LLM.
Functions¶
language_models.chat_models.agenerate_from_stream(stream)
Async generate from a stream.
language_models.chat_models.generate_from_stream(stream)
Generate from a stream.
language_models.llms.aget_prompts(params, ...)
Get prompts that are already cached.
language_models.llms.aupdate_cache(cache, ...)
Update the cache and get the LLM output.
language_models.llms.create_base_retry_decorator(...)
Create a retry decorator for a given LLM and provided list of error types.
language_models.llms.get_prompts(params, prompts)
Get prompts that are already cached.
language_models.llms.update_cache(cache, ...)
Update the cache and get the LLM output.
langchain_core.load¶
Load module helps with serialization and deserialization.
Classes¶
load.load.Reviver([secrets_map, ...])
Reviver for JSON objects.
load.serializable.BaseSerialized
Base class for serialized objects.
load.serializable.Serializable
Serializable base class.
load.serializable.SerializedConstructor
Serialized constructor.
load.serializable.SerializedNotImplemented
Serialized not implemented.
load.serializable.SerializedSecret
Serialized secret.
Functions¶
load.dump.default(obj)
Return a default value for a Serializable object or a SerializedNotImplemented object.
load.dump.dumpd(obj)
Return a json dict representation of an object.
load.dump.dumps(obj, *[, pretty])
Return a json string representation of an object.
load.load.load(obj, *[, secrets_map, ...])
[Beta] Revive a LangChain class from a JSON object.
load.load.loads(text, *[, secrets_map, ...])
[Beta] Revive a LangChain class from a JSON string.
load.serializable.to_json_not_implemented(obj)
Serialize a "not implemented" object.
load.serializable.try_neq_default(value, ...)
Try to determine if a value is different from the default.
langchain_core.memory¶
Memory maintains Chain state, incorporating context from past runs.
Class hierarchy for Memory:
BaseMemory --> <name>Memory --> <name>Memory # Examples: BaseChatMemory -> MotorheadMemory
Classes¶
memory.BaseMemory
Abstract base class for memory in Chains.
langchain_core.messages¶
Messages are objects used in prompts and chat conversations.
Class hierarchy:
BaseMessage --> SystemMessage, AIMessage, HumanMessage, ChatMessage, FunctionMessage, ToolMessage
--> BaseMessageChunk --> SystemMessageChunk, AIMessageChunk, HumanMessageChunk, ChatMessageChunk, FunctionMessageChunk, ToolMessageChunk
Main helpers:
ChatPromptTemplate
Classes¶
messages.ai.AIMessage
Message from an AI.
messages.ai.AIMessageChunk
Message chunk from an AI.
messages.base.BaseMessage
Base abstract Message class.
messages.base.BaseMessageChunk
Message chunk, which can be concatenated with other Message chunks.
messages.chat.ChatMessage
Message that can be assigned an arbitrary speaker (i.e. role).
messages.chat.ChatMessageChunk
Chat Message chunk.
messages.function.FunctionMessage
Message for passing the result of executing a function back to a model.
messages.function.FunctionMessageChunk
Function Message chunk.
messages.human.HumanMessage
Message from a human.
messages.human.HumanMessageChunk
Human Message chunk.
messages.system.SystemMessage
Message for priming AI behavior, usually passed in as the first of a sequence of input messages.
messages.system.SystemMessageChunk
System Message chunk.
messages.tool.InvalidToolCall
Allowance for errors made by LLM.
messages.tool.ToolCall
Represents a request to call a tool.
messages.tool.ToolCallChunk
A chunk of a tool call (e.g., as part of a stream).
messages.tool.ToolMessage
Message for passing the result of executing a tool back to a model.
messages.tool.ToolMessageChunk
Tool Message chunk.
Functions¶
messages.base.get_msg_title_repr(title, *[, ...])
Get a title representation for a message.
messages.base.merge_content(first_content, ...)
Merge two message contents.
messages.base.message_to_dict(message)
Convert a Message to a dictionary.
messages.base.messages_to_dict(messages)
Convert a sequence of Messages to a list of dictionaries.
messages.tool.default_tool_chunk_parser(...)
Best-effort parsing of tool chunks.
messages.tool.default_tool_parser(raw_tool_calls)
Best-effort parsing of tools.
messages.utils.convert_to_messages(messages)
Convert a sequence of messages to a list of messages.
messages.utils.get_buffer_string(messages[, ...])
Convert a sequence of Messages to strings and concatenate them into one string.
messages.utils.message_chunk_to_message(chunk)
Convert a message chunk to a message.
messages.utils.messages_from_dict(messages)
Convert a sequence of messages from dicts to Message objects.
langchain_core.output_parsers¶
OutputParser classes parse the output of an LLM call.
Class hierarchy:
BaseLLMOutputParser --> BaseOutputParser --> <name>OutputParser # ListOutputParser, PydanticOutputParser
Main helpers:
Serializable, Generation, PromptValue
Classes¶
output_parsers.base.BaseGenerationOutputParser
Base class to parse the output of an LLM call.
output_parsers.base.BaseLLMOutputParser()
Abstract base class for parsing the outputs of a model.
output_parsers.base.BaseOutputParser
Base class to parse the output of an LLM call.
output_parsers.json.JsonOutputParser
Parse the output of an LLM call to a JSON object.
output_parsers.json.SimpleJsonOutputParser
alias of JsonOutputParser
output_parsers.list.CommaSeparatedListOutputParser
Parse the output of an LLM call to a comma-separated list.
output_parsers.list.ListOutputParser
Parse the output of an LLM call to a list.
output_parsers.list.MarkdownListOutputParser
Parse a markdown list.
output_parsers.list.NumberedListOutputParser
Parse a numbered list.
output_parsers.openai_functions.JsonKeyOutputFunctionsParser
Parse an output as the element of the Json object.
output_parsers.openai_functions.JsonOutputFunctionsParser
Parse an output as the Json object.
output_parsers.openai_functions.OutputFunctionsParser
Parse an output that is one of sets of values.
output_parsers.openai_functions.PydanticAttrOutputFunctionsParser
Parse an output as an attribute of a pydantic object.
output_parsers.openai_functions.PydanticOutputFunctionsParser
Parse an output as a pydantic object.
output_parsers.openai_tools.JsonOutputKeyToolsParser
Parse tools from OpenAI response.
output_parsers.openai_tools.JsonOutputToolsParser
Parse tools from OpenAI response.
output_parsers.openai_tools.PydanticToolsParser
Parse tools from OpenAI response.
output_parsers.pydantic.PydanticOutputParser
Parse an output using a pydantic model.
output_parsers.string.StrOutputParser
OutputParser that parses LLMResult into the top likely string.
output_parsers.transform.BaseCumulativeTransformOutputParser
Base class for an output parser that can handle streaming input.
output_parsers.transform.BaseTransformOutputParser
Base class for an output parser that can handle streaming input.
output_parsers.xml.XMLOutputParser
Parse an output using xml format.
Functions¶
output_parsers.list.droplastn(iter, n)
Drop the last n elements of an iterator.
output_parsers.openai_tools.make_invalid_tool_call(...)
Create an InvalidToolCall from a raw tool call.
output_parsers.openai_tools.parse_tool_call(...)
Parse a single tool call.
output_parsers.openai_tools.parse_tool_calls(...)
Parse a list of tool calls.
output_parsers.xml.nested_element(path, elem)
Get nested element from path.
langchain_core.outputs¶
Output classes are used to represent the output of a language model call
and the output of a chat.
Classes¶
outputs.chat_generation.ChatGeneration
A single chat generation output.
outputs.chat_generation.ChatGenerationChunk
ChatGeneration chunk, which can be concatenated with other
outputs.chat_result.ChatResult
Class that contains all results for a single chat model call.
outputs.generation.Generation
A single text generation output.
outputs.generation.GenerationChunk
Generation chunk, which can be concatenated with other Generation chunks.
outputs.llm_result.LLMResult
Class that contains all results for a batched LLM call.
outputs.run_info.RunInfo
Class that contains metadata for a single execution of a Chain or model.
langchain_core.prompt_values¶
Prompt values for language model prompts.
Prompt values are used to represent different pieces of prompts.
They can be used to represent text, images, or chat message pieces.
Classes¶
prompt_values.ChatPromptValue
Chat prompt value.
prompt_values.ChatPromptValueConcrete
Chat prompt value which explicitly lists out the message types it accepts.
prompt_values.ImagePromptValue
Image prompt value.
prompt_values.ImageURL
prompt_values.PromptValue
Base abstract class for inputs to any language model.
prompt_values.StringPromptValue
String prompt value.
langchain_core.prompts¶
A prompt is the input to the model.
Prompts are often constructed from multiple
components and prompt values. The classes and functions in this module make
constructing and working with prompts easy.
Class hierarchy:
BasePromptTemplate --> PipelinePromptTemplate
StringPromptTemplate --> PromptTemplate
FewShotPromptTemplate
FewShotPromptWithTemplates
BaseChatPromptTemplate --> AutoGPTPrompt
ChatPromptTemplate --> AgentScratchPadChatPromptTemplate
BaseMessagePromptTemplate --> MessagesPlaceholder
BaseStringMessagePromptTemplate --> ChatMessagePromptTemplate
HumanMessagePromptTemplate
AIMessagePromptTemplate
SystemMessagePromptTemplate
Classes¶
prompts.base.BasePromptTemplate
Base class for all prompt templates, returning a prompt.
prompts.chat.AIMessagePromptTemplate
AI message prompt template.
prompts.chat.BaseChatPromptTemplate
Base class for chat prompt templates.
prompts.chat.BaseMessagePromptTemplate
Base class for message prompt templates.
prompts.chat.BaseStringMessagePromptTemplate
Base class for message prompt templates that use a string prompt template.
prompts.chat.ChatMessagePromptTemplate
Chat message prompt template.
prompts.chat.ChatPromptTemplate
Prompt template for chat models.
prompts.chat.HumanMessagePromptTemplate
Human message prompt template.
prompts.chat.MessagesPlaceholder
Prompt template that assumes variable is already list of messages.
prompts.chat.SystemMessagePromptTemplate
System message prompt template.
prompts.few_shot.FewShotChatMessagePromptTemplate
Chat prompt template that supports few-shot examples.
prompts.few_shot.FewShotPromptTemplate
Prompt template that contains few shot examples.
prompts.few_shot_with_templates.FewShotPromptWithTemplates
Prompt template that contains few shot examples.
prompts.image.ImagePromptTemplate
An image prompt template for a multimodal model.
prompts.pipeline.PipelinePromptTemplate
Prompt template for composing multiple prompt templates together.
prompts.prompt.PromptTemplate
A prompt template for a language model.
prompts.string.StringPromptTemplate
String prompt that exposes the format method, returning a prompt.
prompts.structured.StructuredPrompt
[Beta]
Functions¶
prompts.base.aformat_document(doc, prompt)
Format a document into a string based on a prompt template.
prompts.base.format_document(doc, prompt)
Format a document into a string based on a prompt template.
prompts.loading.load_prompt(path)
Unified method for loading a prompt from LangChainHub or local fs.
prompts.loading.load_prompt_from_config(config)
Load prompt from Config Dict.
prompts.string.check_valid_template(...)
Check that template string is valid.
prompts.string.get_template_variables(...)
Get the variables from the template.
prompts.string.jinja2_formatter(template, ...)
Format a template using jinja2.
prompts.string.mustache_formatter(template, ...)
Format a template using mustache.
prompts.string.mustache_schema(template)
Get the variables from a mustache template.
prompts.string.mustache_template_vars(template)
Get the variables from a mustache template.
prompts.string.validate_jinja2(template, ...)
Validate that the input variables are valid for the template.
langchain_core.retrievers¶
Retriever class returns Documents given a text query.
It is more general than a vector store. A retriever does not need to be able to
store documents, only to return (or retrieve) them. Vector stores can be used as
the backbone of a retriever, but there are other types of retrievers as well.
Class hierarchy:
BaseRetriever --> <name>Retriever # Examples: ArxivRetriever, MergerRetriever
Main helpers:
RetrieverInput, RetrieverOutput, RetrieverLike, RetrieverOutputLike,
Document, Serializable, Callbacks,
CallbackManagerForRetrieverRun, AsyncCallbackManagerForRetrieverRun
Classes¶
retrievers.BaseRetriever
Abstract base class for a Document retrieval system.
langchain_core.runnables¶
LangChain Runnable and the LangChain Expression Language (LCEL).
The LangChain Expression Language (LCEL) offers a declarative method to build
production-grade programs that harness the power of LLMs.
Programs created using LCEL and LangChain Runnables inherently support
synchronous, asynchronous, batch, and streaming operations.
Support for async allows servers hosting LCEL based programs to scale better
for higher concurrent loads.
Batch operations allow for processing multiple inputs in parallel.
Streaming of intermediate outputs, as they’re being generated, allows for
creating more responsive UX.
This module contains schema and implementation of LangChain Runnables primitives.
Classes¶
runnables.base.Runnable()
A unit of work that can be invoked, batched, streamed, transformed and composed.
runnables.base.RunnableBinding
Wrap a Runnable with additional functionality.
runnables.base.RunnableBindingBase
Runnable that delegates calls to another Runnable with a set of kwargs.
runnables.base.RunnableEach
Runnable that delegates calls to another Runnable with each element of the input sequence.
runnables.base.RunnableEachBase
Runnable that delegates calls to another Runnable with each element of the input sequence.
runnables.base.RunnableGenerator(transform)
Runnable that runs a generator function.
runnables.base.RunnableLambda(func[, afunc, ...])
RunnableLambda converts a python callable into a Runnable.
runnables.base.RunnableMap
alias of RunnableParallel
runnables.base.RunnableParallel
Runnable that runs a mapping of Runnables in parallel, and returns a mapping of their outputs.
runnables.base.RunnableSequence
Sequence of Runnables, where the output of each is the input of the next.
runnables.base.RunnableSerializable
Runnable that can be serialized to JSON.
runnables.branch.RunnableBranch
Runnable that selects which branch to run based on a condition.
runnables.config.ContextThreadPoolExecutor([...])
ThreadPoolExecutor that copies the context to the child thread.
runnables.config.EmptyDict
Empty dict type.
runnables.config.RunnableConfig
Configuration for a Runnable.
runnables.configurable.DynamicRunnable
Serializable Runnable that can be dynamically configured.
runnables.configurable.RunnableConfigurableAlternatives
Runnable that can be dynamically configured.
runnables.configurable.RunnableConfigurableFields
Runnable that can be dynamically configured.
runnables.configurable.StrEnum(value)
String enum.
runnables.fallbacks.RunnableWithFallbacks
Runnable that can fallback to other Runnables if it fails.
runnables.graph.Branch(condition, ends)
Branch in a graph.
runnables.graph.CurveStyle(value)
Enum for different curve styles supported by Mermaid.
runnables.graph.Edge(source, target[, data])
Edge in a graph.
runnables.graph.Graph(nodes, ...)
Graph of nodes and edges.
runnables.graph.LabelsDict
runnables.graph.MermaidDrawMethod(value)
Enum for different draw methods supported by Mermaid.
runnables.graph.Node(id, data)
Node in a graph.
runnables.graph.NodeColors([start, end, other])
Schema for hexadecimal color codes for different node types.
runnables.graph_ascii.AsciiCanvas(cols, lines)
Class for drawing in ASCII.
runnables.graph_ascii.VertexViewer(name)
Class to define vertex box boundaries that will be accounted for during graph building by grandalf.
runnables.graph_png.PngDrawer([fontname, labels])
A helper class to draw a state graph into a PNG file. Requires graphviz and pygraphviz to be installed. The optional fontname sets the label font, and labels maps original node and edge labels to overrides, e.g. { "nodes": { "node1": "CustomLabel1", "__end__": "End Node" }, "edges": { "continue": "ContinueLabel" } }. Usage: drawer = PngDrawer(); drawer.draw(state_graph, 'graph.png').
runnables.history.RunnableWithMessageHistory
Runnable that manages chat message history for another Runnable.
runnables.passthrough.RunnableAssign
A runnable that assigns key-value pairs to Dict[str, Any] inputs.
runnables.passthrough.RunnablePassthrough
Runnable to passthrough inputs unchanged or with additional keys.
runnables.passthrough.RunnablePick
Runnable that picks keys from Dict[str, Any] inputs.
runnables.retry.RunnableRetry
Retry a Runnable if it fails.
runnables.router.RouterInput
Router input.
runnables.router.RouterRunnable
Runnable that routes to a set of Runnables based on Input['key'].
runnables.schema.EventData
Data associated with a streaming event.
runnables.schema.StreamEvent
Streaming event.
runnables.utils.AddableDict
Dictionary that can be added to another dictionary.
runnables.utils.ConfigurableField(id[, ...])
Field that can be configured by the user.
runnables.utils.ConfigurableFieldMultiOption(id, ...)
Field that can be configured by the user with multiple default values.
runnables.utils.ConfigurableFieldSingleOption(id, ...)
Field that can be configured by the user with a default value.
runnables.utils.ConfigurableFieldSpec(id, ...)
Field that can be configured by the user.
runnables.utils.FunctionNonLocals()
Get the nonlocal variables accessed of a function.
runnables.utils.GetLambdaSource()
Get the source code of a lambda function.
runnables.utils.IsFunctionArgDict()
Check if the first argument of a function is a dict.
runnables.utils.IsLocalDict(name, keys)
Check if a name is a local dict.
runnables.utils.NonLocals()
Get nonlocal variables accessed.
runnables.utils.SupportsAdd(*args, **kwargs)
Protocol for objects that support addition.
Functions¶
runnables.base.chain()
Decorate a function to make it a Runnable.
runnables.base.coerce_to_runnable(thing)
Coerce a runnable-like object into a Runnable.
runnables.config.acall_func_with_variable_args(...)
Call function that may optionally accept a run_manager and/or config.
runnables.config.call_func_with_variable_args(...)
Call function that may optionally accept a run_manager and/or config.
runnables.config.ensure_config([config])
Ensure that a config is a dict with all keys present.
runnables.config.get_async_callback_manager_for_config(config)
Get an async callback manager for a config.
runnables.config.get_callback_manager_for_config(config)
Get a callback manager for a config.
runnables.config.get_config_list(config, length)
Get a list of configs from a single config or a list of configs.
runnables.config.get_executor_for_config(config)
Get an executor for a config.
runnables.config.merge_configs(*configs)
Merge multiple configs into one.
runnables.config.patch_config(config, *[, ...])
Patch a config with new values.
runnables.config.run_in_executor(...)
Run a function in an executor.
runnables.configurable.make_options_spec(...)
Make a ConfigurableFieldSpec for a ConfigurableFieldSingleOption or ConfigurableFieldMultiOption.
runnables.configurable.prefix_config_spec(...)
Prefix the id of a ConfigurableFieldSpec.
runnables.graph.is_uuid(value)
runnables.graph.node_data_json(node, *[, ...])
runnables.graph.node_data_str(node)
runnables.graph_ascii.draw_ascii(vertices, edges)
Build a DAG and draw it in ASCII.
runnables.graph_mermaid.draw_mermaid(nodes, ...)
Draws a Mermaid graph using the provided graph data.
runnables.graph_mermaid.draw_mermaid_png(...)
Draws a Mermaid graph as PNG using provided syntax.
runnables.passthrough.aidentity(x)
Async identity function.
runnables.passthrough.identity(x)
Identity function.
runnables.utils.aadd(addables)
Asynchronously add a sequence of addable objects together.
runnables.utils.accepts_config(callable)
Check if a callable accepts a config argument.
runnables.utils.accepts_context(callable)
Check if a callable accepts a context argument.
runnables.utils.accepts_run_manager(callable)
Check if a callable accepts a run_manager argument.
runnables.utils.adapt_first_streaming_chunk(chunk)
This might transform the first chunk of a stream into an AddableDict.
runnables.utils.add(addables)
Add a sequence of addable objects together.
runnables.utils.create_model(__model_name, ...)
runnables.utils.gated_coro(semaphore, coro)
Run a coroutine with a semaphore.
runnables.utils.gather_with_concurrency(n, ...)
Gather coroutines with a limit on the number of concurrent coroutines.
runnables.utils.get_function_first_arg_dict_keys(func)
Get the keys of the first argument of a function if it is a dict.
runnables.utils.get_function_nonlocals(func)
Get the nonlocal variables accessed by a function.
runnables.utils.get_lambda_source(func)
Get the source code of a lambda function.
runnables.utils.get_unique_config_specs(specs)
Get the unique config specs from a sequence of config specs.
runnables.utils.indent_lines_after_first(...)
Indent all lines of text after the first line.
langchain_core.stores¶
Store implements the key-value stores and storage helpers.
This module provides implementations of various key-value stores that conform
to a simple key-value interface.
The primary goal of these stores is to support the implementation of caching.
Classes¶
stores.BaseStore()
Abstract interface for a key-value store.
langchain_core.sys_info¶
sys_info prints information about the system and langchain packages
for debugging purposes.
Functions¶
sys_info.print_sys_info(*[, additional_pkgs])
Print information about the environment for debugging purposes.
langchain_core.tools¶
Tools are classes that an Agent uses to interact with the world.
Each tool has a description. The agent uses the description to choose the right
tool for the job.
Class hierarchy:
RunnableSerializable --> BaseTool --> <name>Tool # Examples: AIPluginTool, BaseGraphQLTool
<name> # Examples: BraveSearch, HumanInputRun
Main helpers:
CallbackManagerForToolRun, AsyncCallbackManagerForToolRun
Classes¶
tools.BaseTool
Interface LangChain tools must implement.
tools.SchemaAnnotationError
Raised when 'args_schema' is missing or has an incorrect type annotation.
tools.StructuredTool
Tool that can operate on any number of inputs.
tools.Tool
Tool that takes in function or coroutine directly.
tools.ToolException
Optional exception that tool throws when execution error occurs.
Functions¶
tools.create_schema_from_function(...)
Create a pydantic schema from a function's signature.
tools.tool(*args[, return_direct, ...])
Make tools out of functions, can be used with or without arguments.
langchain_core.tracers¶
Tracers are classes for tracing runs.
Class hierarchy:
BaseCallbackHandler --> BaseTracer --> <name>Tracer # Examples: LangChainTracer, RootListenersTracer
--> <name> # Examples: LogStreamCallbackHandler
Classes¶
tracers.base.BaseTracer(*[, _schema_format])
Base interface for tracers.
tracers.evaluation.EvaluatorCallbackHandler(...)
Tracer that runs a run evaluator whenever a run is persisted.
tracers.langchain.LangChainTracer([...])
Implementation of the SharedTracer that POSTS to the LangChain endpoint.
tracers.log_stream.LogEntry
A single entry in the run log.
tracers.log_stream.LogStreamCallbackHandler(*)
Tracer that streams run logs to a stream.
tracers.log_stream.RunLog(*ops, state)
Run log.
tracers.log_stream.RunLogPatch(*ops)
Patch to the run log.
tracers.log_stream.RunState
State of the run.
tracers.root_listeners.RootListenersTracer(*, ...)
Tracer that calls listeners on run start, end, and error.
tracers.run_collector.RunCollectorCallbackHandler([...])
Tracer that collects all nested runs in a list.
tracers.schemas.BaseRun
[Deprecated] Base class for Run.
tracers.schemas.ChainRun
[Deprecated] Class for ChainRun.
tracers.schemas.LLMRun
[Deprecated] Class for LLMRun.
tracers.schemas.Run
Run schema for the V2 API in the Tracer.
tracers.schemas.ToolRun
[Deprecated] Class for ToolRun.
tracers.schemas.TracerSession
[Deprecated] TracerSessionV1 schema for the V2 API.
tracers.schemas.TracerSessionBase
[Deprecated] Base class for TracerSession.
tracers.schemas.TracerSessionV1
[Deprecated] TracerSessionV1 schema.
tracers.schemas.TracerSessionV1Base
[Deprecated] Base class for TracerSessionV1.
tracers.schemas.TracerSessionV1Create
[Deprecated] Create class for TracerSessionV1.
tracers.stdout.ConsoleCallbackHandler(**kwargs)
Tracer that prints to the console.
tracers.stdout.FunctionCallbackHandler(...)
Tracer that calls a function with a single str parameter.
Functions¶
tracers.context.collect_runs()
Collect all run traces in context.
tracers.context.register_configure_hook(...)
Register a configure hook.
tracers.context.tracing_enabled([session_name])
Throws an error because this has been replaced by tracing_v2_enabled.
tracers.context.tracing_v2_enabled([...])
Instruct LangChain to log all runs in context to LangSmith.
tracers.evaluation.wait_for_all_evaluators()
Wait for all tracers to finish.
tracers.langchain.get_client()
Get the client.
tracers.langchain.log_error_once(method, ...)
Log an error once.
tracers.langchain.wait_for_all_tracers()
Wait for all tracers to finish.
tracers.langchain_v1.LangChainTracerV1(...)
tracers.langchain_v1.get_headers(*args, **kwargs)
tracers.schemas.RunTypeEnum()
[Deprecated] RunTypeEnum.
tracers.stdout.elapsed(run)
Get the elapsed time of a run.
tracers.stdout.try_json_stringify(obj, fallback)
Try to stringify an object to JSON.
langchain_core.utils¶
Utility functions for LangChain.
These functions do not depend on any other LangChain module.
Classes¶
utils.aiter.NoLock()
Dummy lock that provides the proper interface but no protection
utils.aiter.Tee(iterable[, n, lock])
Create n separate asynchronous iterators over iterable
utils.aiter.atee
alias of Tee
utils.formatting.StrictFormatter()
Formatter that checks for extra keys.
utils.function_calling.FunctionDescription
Representation of a callable function to send to an LLM.
utils.function_calling.ToolDescription
Representation of a callable function to the OpenAI API.
utils.iter.NoLock()
Dummy lock that provides the proper interface but no protection
utils.iter.Tee(iterable[, n, lock])
Create n separate asynchronous iterators over iterable
utils.iter.safetee
alias of Tee
utils.mustache.ChevronError
Functions¶
utils.aiter.py_anext(iterator[, default])
Pure-Python implementation of anext() for testing purposes.
utils.aiter.tee_peer(iterator, buffer, ...)
An individual iterator of a tee()
utils.env.env_var_is_set(env_var)
Check if an environment variable is set.
utils.env.get_from_dict_or_env(data, key, ...)
Get a value from a dictionary or an environment variable.
utils.env.get_from_env(key, env_key[, default])
Get a value from a dictionary or an environment variable.
utils.function_calling.convert_pydantic_to_openai_function(...)
[Deprecated] Converts a Pydantic model to a function description for the OpenAI API.
utils.function_calling.convert_pydantic_to_openai_tool(...)
[Deprecated] Converts a Pydantic model to a function description for the OpenAI API.
utils.function_calling.convert_python_function_to_openai_function(...)
[Deprecated] Convert a Python function to an OpenAI function-calling API compatible dict.
utils.function_calling.convert_to_openai_function(...)
Convert a raw function/class to an OpenAI function.
utils.function_calling.convert_to_openai_tool(tool)
Convert a raw function/class to an OpenAI tool.
utils.function_calling.format_tool_to_openai_function(tool)
[Deprecated] Format tool into the OpenAI function API.
utils.function_calling.format_tool_to_openai_tool(tool)
[Deprecated] Format tool into the OpenAI function API.
utils.function_calling.tool_example_to_messages(...)
Convert an example into a list of messages that can be fed into an LLM.
utils.html.extract_sub_links(raw_html, url, *)
Extract all links from a raw html string and convert into absolute paths.
utils.html.find_all_links(raw_html, *[, pattern])
Extract all links from a raw html string.
utils.image.encode_image(image_path)
Get base64 string from image URI.
utils.image.image_to_data_url(image_path)
utils.input.get_bolded_text(text)
Get bolded text.
utils.input.get_color_mapping(items[, ...])
Get mapping for items to a support color.
utils.input.get_colored_text(text, color)
Get colored text.
utils.input.print_text(text[, color, end, file])
Print text with highlighting and no end characters.
utils.interactive_env.is_interactive_env()
Determine if running within IPython or Jupyter.
utils.iter.batch_iterate(size, iterable)
Utility batching function.
utils.iter.tee_peer(iterator, buffer, peers, ...)
An individual iterator of a tee()
utils.json.parse_and_check_json_markdown(...)
Parse a JSON string from a Markdown string and check that it contains the expected keys.
utils.json.parse_json_markdown(json_string, *)
Parse a JSON string from a Markdown string.
utils.json.parse_partial_json(s, *[, strict])
Parse a JSON string that may be missing closing braces.
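The idea behind parsing a JSON string with missing closing braces can be sketched in pure Python (a simplified illustration of the technique, not the actual `utils.json.parse_partial_json` implementation): walk the string tracking open containers and unterminated strings, then append the missing closers before handing off to the standard parser.

```python
import json

def parse_partial_json_sketch(s: str):
    """Best-effort parse of a JSON string that may be cut off mid-structure.

    Simplified sketch: track a stack of expected closers and whether we are
    inside a string literal, then close everything that is still open.
    """
    stack = []
    in_string = False
    escaped = False
    for ch in s:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch == "{":
            stack.append("}")
        elif ch == "[":
            stack.append("]")
        elif ch in "}]":
            if stack and stack[-1] == ch:
                stack.pop()
    if in_string:
        s += '"'                      # terminate a dangling string
    s += "".join(reversed(stack))     # close containers innermost-first
    return json.loads(s)

result = parse_partial_json_sketch('{"a": [1, 2, {"b": 3')
```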
utils.json_schema.dereference_refs(schema_obj, *)
Try to substitute $refs in JSON Schema.
utils.loading.try_load_from_hub(*args, **kwargs)
[Deprecated]
utils.mustache.grab_literal(template, l_del)
Parse a literal from the template
utils.mustache.l_sa_check(template, literal, ...)
Do a preliminary check to see if a tag could be a standalone
utils.mustache.parse_tag(template, l_del, r_del)
Parse a tag from a template
utils.mustache.r_sa_check(template, ...)
Do a final check to see if a tag could be a standalone
utils.mustache.render([template, data, ...])
Render a mustache template.
utils.mustache.tokenize(template[, ...])
Tokenize a mustache template
utils.pydantic.get_pydantic_major_version()
Get the major version of Pydantic.
utils.strings.comma_list(items)
Convert a list to a comma-separated string.
utils.strings.stringify_dict(data)
Stringify a dictionary.
utils.strings.stringify_value(val)
Stringify a value.
utils.utils.build_extra_kwargs(extra_kwargs, ...)
Build extra kwargs from values and extra_kwargs.
utils.utils.check_package_version(package[, ...])
Check the version of a package.
utils.utils.convert_to_secret_str(value)
Convert a string to a SecretStr if needed.
utils.utils.get_pydantic_field_names(...) | https://api.python.langchain.com/en/latest/core_api_reference.html |
Get field names, including aliases, for a pydantic class.
utils.utils.guard_import(module_name, *[, ...])
Dynamically imports a module and raises a helpful exception if the module is not installed.
utils.utils.mock_now(dt_value)
Context manager for mocking out datetime.now() in unit tests.
utils.utils.raise_for_status_with_text(response)
Raise an error with the response text.
utils.utils.xor_args(*arg_groups)
Validate specified keyword args are mutually exclusive.
langchain_core.vectorstores¶
Vector store stores embedded data and performs vector search.
One of the most common ways to store and search over unstructured data is to
embed it and store the resulting embedding vectors, and then query the store
and retrieve the data that are ‘most similar’ to the embedded query.
Class hierarchy:
VectorStore --> <name> # Examples: Annoy, FAISS, Milvus
BaseRetriever --> VectorStoreRetriever --> <name>Retriever # Example: VespaRetriever
Main helpers:
Embeddings, Document
Classes¶
vectorstores.VectorStore()
Interface for vector store.
vectorstores.VectorStoreRetriever
Base Retriever class for VectorStore. | https://api.python.langchain.com/en/latest/core_api_reference.html |
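The embed-store-query pattern described above can be sketched with a toy in-memory store (a pure-Python illustration, not the real VectorStore interface; the character-count "embedding" is a stand-in for a real Embeddings model):

```python
import math

class ToyVectorStore:
    """Minimal illustration of the VectorStore pattern: embed documents,
    store the vectors, then return the texts most similar to a query."""

    def __init__(self, embed):
        self.embed = embed   # callable: str -> list[float]
        self.docs = []       # (text, vector) pairs

    def add_texts(self, texts):
        for text in texts:
            self.docs.append((text, self.embed(text)))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def similarity_search(self, query, k=1):
        qv = self.embed(query)
        ranked = sorted(self.docs, key=lambda d: self._cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def char_embed(text):
    """Toy 'embedding': letter-frequency vector (stand-in for a real model)."""
    return [text.lower().count(ch) for ch in "abcdefghijklmnopqrstuvwxyz"]

store = ToyVectorStore(char_embed)
store.add_texts(["cat", "dog", "catalog"])
top = store.similarity_search("cats", k=1)
```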
langchain_pinecone 0.1.0¶
langchain_pinecone.vectorstores¶
Classes¶
vectorstores.Pinecone([index, embedding, ...])
[Deprecated] Use PineconeVectorStore instead.
vectorstores.PineconeVectorStore([index, ...])
Pinecone vector store. | https://api.python.langchain.com/en/latest/pinecone_api_reference.html |
langchain_nvidia_ai_endpoints 0.0.6¶
langchain_nvidia_ai_endpoints.callbacks¶
Callback Handler that prints to std out.
Classes¶
callbacks.UsageCallbackHandler()
Callback Handler that tracks OpenAI info.
Functions¶
callbacks.get_token_cost_for_model(...[, ...])
Get the cost in USD for a given model and number of tokens.
callbacks.get_usage_callback([price_map, ...])
Get the OpenAI callback handler in a context manager.
callbacks.standardize_model_name(model_name)
Standardize the model name to a format that can be used in the OpenAI API.
langchain_nvidia_ai_endpoints.chat_models¶
Chat Model Components Derived from ChatModel/NVIDIA
Classes¶
chat_models.ChatNVIDIA
NVIDIA chat model.
langchain_nvidia_ai_endpoints.embeddings¶
Embeddings Components Derived from NVEModel/Embeddings
Classes¶
embeddings.NVIDIAEmbeddings
NVIDIA's AI Foundation Retriever Question-Answering Asymmetric Model.
langchain_nvidia_ai_endpoints.image_gen¶
Embeddings Components Derived from NVEModel/Embeddings
Classes¶
image_gen.ImageGenNVIDIA
NVIDIA's AI Foundation Retriever Question-Answering Asymmetric Model.
Functions¶
image_gen.ImageParser()
langchain_nvidia_ai_endpoints.llm¶
Classes¶
llm.NVIDIA
NVIDIA chat model.
langchain_nvidia_ai_endpoints.tools¶
OpenAI chat wrapper.
Classes¶
tools.ServerToolsMixin() | https://api.python.langchain.com/en/latest/nvidia_ai_endpoints_api_reference.html |
langchain_mongodb 0.1.3¶
langchain_mongodb.cache¶
LangChain MongoDB Caches
Functions “_loads_generations” and “_dumps_generations”
are duplicated in this utility from modules:
“libs/community/langchain_community/cache.py”
Classes¶
cache.MongoDBAtlasSemanticCache(...[, ...])
MongoDB Atlas Semantic cache.
cache.MongoDBCache(connection_string[, ...])
MongoDB Atlas cache.
langchain_mongodb.chat_message_histories¶
Classes¶
chat_message_histories.MongoDBChatMessageHistory(...)
Chat message history that stores history in MongoDB.
langchain_mongodb.utils¶
Tools for the Maximal Marginal Relevance (MMR) reranking.
Duplicated from langchain_community to avoid cross-dependencies.
Functions “maximal_marginal_relevance” and “cosine_similarity”
are duplicated in this utility respectively from modules:
“libs/community/langchain_community/vectorstores/utils.py”
“libs/community/langchain_community/utils/math.py”
Functions¶
utils.cosine_similarity(X, Y)
Row-wise cosine similarity between two equal-width matrices.
utils.maximal_marginal_relevance(...[, ...])
Calculate maximal marginal relevance.
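The two helpers above can be illustrated in pure Python (the real utilities operate on NumPy matrices; the vectors and the `lambda_mult` value below are made up for illustration): MMR greedily picks results that are relevant to the query while penalizing redundancy with results already selected.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (pure-Python sketch)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def mmr(query_vec, vectors, k=2, lambda_mult=0.5):
    """Greedy maximal marginal relevance: trade query relevance against
    redundancy with already-selected items. Returns selected indices."""
    selected = []
    candidates = list(range(len(vectors)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = cosine(query_vec, vectors[i])
            redundancy = max((cosine(vectors[i], vectors[j]) for j in selected),
                             default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

query = [1.0, 0.0]
docs = [[0.9, 0.1],    # relevant
        [0.9, 0.12],   # near-duplicate of the first
        [0.0, 1.0]]    # orthogonal, adds diversity
picked = mmr(query, docs, k=2, lambda_mult=0.3)
```

With a low `lambda_mult`, the near-duplicate is skipped in favor of the more diverse third document.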
langchain_mongodb.vectorstores¶
Classes¶
vectorstores.MongoDBAtlasVectorSearch(...[, ...])
MongoDB Atlas Vector Search vector store. | https://api.python.langchain.com/en/latest/mongodb_api_reference.html |
langchain_anthropic 0.1.8¶
langchain_anthropic.chat_models¶
Classes¶
chat_models.AnthropicTool
chat_models.ChatAnthropic
Anthropic chat model.
chat_models.ChatAnthropicMessages
[Deprecated]
Functions¶
chat_models.convert_to_anthropic_tool(tool)
langchain_anthropic.experimental¶
Classes¶
experimental.ChatAnthropicTools
[Deprecated] Chat model for interacting with Anthropic functions.
Functions¶
experimental.get_system_message(tools)
langchain_anthropic.llms¶
Classes¶
llms.Anthropic
[Deprecated]
llms.AnthropicLLM
Anthropic large language model.
langchain_anthropic.output_parsers¶
Classes¶
output_parsers.ToolsOutputParser
Fields
Functions¶
output_parsers.extract_tool_calls(content) | https://api.python.langchain.com/en/latest/anthropic_api_reference.html |
langchain_google_genai 1.0.1¶
langchain_google_genai.chat_models¶
Classes¶
chat_models.ChatGoogleGenerativeAI
Google Generative AI Chat models API.
chat_models.ChatGoogleGenerativeAIError
Custom exception class for errors associated with the Google GenAI API.
langchain_google_genai.embeddings¶
Classes¶
embeddings.GoogleGenerativeAIEmbeddings
Google Generative AI Embeddings.
langchain_google_genai.genai_aqa¶
Google GenerativeAI Attributed Question and Answering (AQA) service.
The GenAI Semantic AQA API is a managed end-to-end service that allows
developers to create responses grounded on specified passages based on
a user query. For more information visit:
https://developers.generativeai.google/guide
Classes¶
genai_aqa.AqaInput
Input to GenAIAqa.invoke.
genai_aqa.AqaOutput
Output from GenAIAqa.invoke.
genai_aqa.GenAIAqa
Google's Attributed Question and Answering service.
langchain_google_genai.google_vector_store¶
Google Generative AI Vector Store.
The GenAI Semantic Retriever API is a managed end-to-end service that allows
developers to create a corpus of documents to perform semantic search on
related passages given a user query. For more information visit:
https://developers.generativeai.google/guide
Classes¶
google_vector_store.DoesNotExistsException(*, ...)
google_vector_store.GoogleVectorStore(*, ...)
Google GenerativeAI Vector Store.
google_vector_store.ServerSideEmbedding()
Do nothing embedding model where the embedding is done by the server.
langchain_google_genai.llms¶
Classes¶
llms.GoogleGenerativeAI
Google GenerativeAI models.
llms.GoogleModelFamily(value)
An enumeration. | https://api.python.langchain.com/en/latest/google_genai_api_reference.html |
langchain_elasticsearch 0.1.2¶
langchain_elasticsearch.chat_history¶
Classes¶
chat_history.ElasticsearchChatMessageHistory(...)
Chat message history that stores history in Elasticsearch.
langchain_elasticsearch.client¶
Functions¶
client.create_elasticsearch_client([url, ...])
langchain_elasticsearch.embeddings¶
Classes¶
embeddings.ElasticsearchEmbeddings(client, ...)
Elasticsearch embedding models.
langchain_elasticsearch.retrievers¶
Classes¶
retrievers.ElasticsearchRetriever
Elasticsearch retriever.
langchain_elasticsearch.vectorstores¶
Classes¶
vectorstores.ApproxRetrievalStrategy([...])
Approximate retrieval strategy using the HNSW algorithm.
vectorstores.BM25RetrievalStrategy([k1, b])
Retrieval strategy using the native BM25 algorithm of Elasticsearch.
vectorstores.BaseRetrievalStrategy()
Base class for Elasticsearch retrieval strategies.
vectorstores.ElasticsearchStore(index_name, ...)
Elasticsearch vector store.
vectorstores.ExactRetrievalStrategy()
Exact retrieval strategy using the script_score query.
vectorstores.SparseRetrievalStrategy([model_id])
Sparse retrieval strategy using the text_expansion processor. | https://api.python.langchain.com/en/latest/elasticsearch_api_reference.html |
langchain_together 0.1.0¶
langchain_together.embeddings¶
Classes¶
embeddings.TogetherEmbeddings
TogetherEmbeddings embedding model.
langchain_together.llms¶
Wrapper around Together AI’s Completion API.
Classes¶
llms.Together
LLM models from Together. | https://api.python.langchain.com/en/latest/together_api_reference.html |
langchain_airbyte 0.1.1¶
langchain_nvidia_trt 0.0.1¶
langchain_nvidia_trt.llms¶
Classes¶
llms.StreamingResponseGenerator(llm, ...)
A Generator that provides the inference results from an LLM.
llms.TritonTensorRTError
Base exception for TritonTensorRT.
llms.TritonTensorRTLLM
TRTLLM triton models.
llms.TritonTensorRTRuntimeError
Runtime error for TritonTensorRT. | https://api.python.langchain.com/en/latest/nvidia_trt_api_reference.html |
langchain_chroma 0.1.0rc1¶
langchain_chroma.vectorstores¶
Classes¶
vectorstores.Chroma([collection_name, ...])
ChromaDB vector store.
Functions¶
vectorstores.cosine_similarity(X, Y)
Row-wise cosine similarity between two equal-width matrices.
vectorstores.maximal_marginal_relevance(...)
Calculate maximal marginal relevance. | https://api.python.langchain.com/en/latest/chroma_api_reference.html |
langchain_ai21 0.1.3¶
langchain_ai21.ai21_base¶
Classes¶
ai21_base.AI21Base
Create a new model by parsing and validating input data from keyword arguments.
langchain_ai21.chat_models¶
Classes¶
chat_models.ChatAI21
ChatAI21 chat model.
langchain_ai21.contextual_answers¶
Classes¶
contextual_answers.AI21ContextualAnswers
Create a new model by parsing and validating input data from keyword arguments.
contextual_answers.ContextualAnswerInput
langchain_ai21.embeddings¶
Classes¶
embeddings.AI21Embeddings
AI21 Embeddings embedding model.
langchain_ai21.llms¶
Classes¶
llms.AI21LLM
AI21LLM large language models.
langchain_ai21.semantic_text_splitter¶
Classes¶
semantic_text_splitter.AI21SemanticTextSplitter([...])
Splitting text into coherent and readable units, based on distinct topics and lines.
langchain_voyageai 0.1.0¶
langchain_voyageai.embeddings¶
Classes¶
embeddings.VoyageAIEmbeddings
VoyageAIEmbeddings embedding model.
langchain_voyageai.rerank¶
Classes¶
rerank.VoyageAIRerank
Document compressor that uses VoyageAI Rerank API. | https://api.python.langchain.com/en/latest/voyageai_api_reference.html |
langchain_cohere 0.1.2¶
langchain_cohere.chat_models¶
Classes¶
chat_models.ChatCohere
Cohere chat large language models.
Functions¶
chat_models.get_cohere_chat_request(messages, *)
Get the request for the Cohere chat API.
chat_models.get_role(message)
Get the role of the message.
langchain_cohere.cohere_agent¶
Functions¶
cohere_agent.create_cohere_tools_agent(llm, ...)
langchain_cohere.common¶
Classes¶
common.CohereCitation(start, end, text, ...)
Cohere has fine-grained citations that specify the exact part of text.
langchain_cohere.embeddings¶
Classes¶
embeddings.CohereEmbeddings
Cohere embedding models.
langchain_cohere.llms¶
Classes¶
llms.BaseCohere
Base class for Cohere models.
llms.Cohere
Cohere large language models.
Functions¶
llms.acompletion_with_retry(llm, **kwargs)
Use tenacity to retry the completion call.
llms.completion_with_retry(llm, **kwargs)
Use tenacity to retry the completion call.
llms.enforce_stop_tokens(text, stop)
Cut off the text as soon as any stop words occur.
langchain_cohere.rag_retrievers¶
Classes¶
rag_retrievers.CohereRagRetriever
Cohere Chat API with RAG.
langchain_cohere.react_multi_hop¶
Classes¶
react_multi_hop.parsing.CohereToolsReactAgentOutputParser
Parses a message into agent actions/finish.
Functions¶
react_multi_hop.agent.create_cohere_react_agent(...)
Create an agent that enables multiple tools to be used in sequence to complete a task.
react_multi_hop.parsing.parse_actions(generation)
Parse action selections from model output.
react_multi_hop.parsing.parse_answer_with_prefixes(...)
Parses a string into key-value pairs.
react_multi_hop.parsing.parse_citations(...)
Parses a grounded_generation (from parse_actions) and documents (from convert_to_documents) into a (generation, CohereCitation list) tuple.
react_multi_hop.parsing.parse_jsonified_tool_use_generation(...)
Parses model-generated jsonified actions.
react_multi_hop.prompt.convert_to_documents(...)
Converts observations into a 'document' dict
react_multi_hop.prompt.create_directly_answer_tool()
directly_answer is a special tool that's always presented to the model as an available tool.
react_multi_hop.prompt.multi_hop_prompt(...)
The returned function produces a BasePromptTemplate suitable for multi-hop.
react_multi_hop.prompt.render_intermediate_steps(...)
Renders an agent's intermediate steps into prompt content.
react_multi_hop.prompt.render_messages(messages)
Renders one or more BaseMessage implementations into prompt content.
react_multi_hop.prompt.render_observations(...)
Renders the 'output' part of an Agent's intermediate step into prompt content.
react_multi_hop.prompt.render_role(message)
Renders the role of a message into prompt content.
react_multi_hop.prompt.render_structured_preamble([...])
Renders the structured preamble part of the prompt content.
react_multi_hop.prompt.render_tool(tool)
Renders a tool into prompt content
react_multi_hop.prompt.render_tool_args(tool)
Renders the 'Args' section of a tool's prompt content.
react_multi_hop.prompt.render_tool_signature(tool)
Renders the signature of a tool into prompt content. | https://api.python.langchain.com/en/latest/cohere_api_reference.html |
react_multi_hop.prompt.render_type(type_, ...)
Renders a tool's type into prompt content.
langchain_cohere.rerank¶
Classes¶
rerank.CohereRerank
Document compressor that uses Cohere Rerank API. | https://api.python.langchain.com/en/latest/cohere_api_reference.html |
langchain 0.1.16¶
langchain.agents¶
Agent is a class that uses an LLM to choose a sequence of actions to take.
In Chains, a sequence of actions is hardcoded. In Agents,
a language model is used as a reasoning engine to determine which actions
to take and in which order.
Agents select and use Tools and Toolkits for actions.
Class hierarchy:
BaseSingleActionAgent --> LLMSingleActionAgent
OpenAIFunctionsAgent
XMLAgent
Agent --> <name>Agent # Examples: ZeroShotAgent, ChatAgent
BaseMultiActionAgent --> OpenAIMultiFunctionsAgent
Main helpers:
AgentType, AgentExecutor, AgentOutputParser, AgentExecutorIterator,
AgentAction, AgentFinish
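The action-selection loop described above can be sketched in plain Python (a conceptual illustration only; `fake_llm_decide` is a hypothetical stub standing in for the LLM reasoning step, and the names here are not real LangChain APIs):

```python
def fake_llm_decide(question, scratchpad):
    """Stand-in for the LLM reasoning step (hypothetical rule-based logic):
    first consult the calculator, then finish with its observation."""
    if not scratchpad:
        return ("calculator", "2 + 3")
    return ("finish", scratchpad[-1][1])

# Tools the agent may select; eval is acceptable only in this toy setting.
tools = {"calculator": lambda expr: str(eval(expr))}

def run_agent(question, max_steps=5):
    scratchpad = []  # (action, observation) pairs, like intermediate steps
    for _ in range(max_steps):
        action, action_input = fake_llm_decide(question, scratchpad)
        if action == "finish":
            return action_input
        observation = tools[action](action_input)
        scratchpad.append((action, observation))
    return None  # gave up after max_steps

answer = run_agent("What is 2 + 3?")
```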
Classes¶
agents.agent.Agent
[Deprecated] Agent that calls the language model and decides the action.
agents.agent.AgentExecutor
Agent that uses tools.
agents.agent.AgentOutputParser
Base class for parsing agent output into agent action/finish.
agents.agent.BaseMultiActionAgent
Base Multi Action Agent class.
agents.agent.BaseSingleActionAgent
Base Single Action Agent class.
agents.agent.ExceptionTool
Tool that just returns the query.
agents.agent.LLMSingleActionAgent
[Deprecated] Base class for single action agents.
agents.agent.MultiActionAgentOutputParser
Base class for parsing agent output into agent actions/finish.
agents.agent.RunnableAgent
Agent powered by runnables.
agents.agent.RunnableMultiActionAgent
Agent powered by runnables.
agents.agent_iterator.AgentExecutorIterator(...)
Iterator for AgentExecutor.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo
Information about a VectorStore.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit
Toolkit for routing between Vector Stores.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit
Toolkit for interacting with a Vector Store.
agents.agent_types.AgentType(value)
[Deprecated] An enum for agent types.
agents.chat.base.ChatAgent
[Deprecated] Chat Agent.
agents.chat.output_parser.ChatOutputParser
Output parser for the chat agent.
agents.conversational.base.ConversationalAgent
[Deprecated] An agent that holds a conversation in addition to using tools.
agents.conversational.output_parser.ConvoOutputParser
Output parser for the conversational agent.
agents.conversational_chat.base.ConversationalChatAgent
[Deprecated] An agent designed to hold a conversation in addition to using tools.
agents.conversational_chat.output_parser.ConvoOutputParser
Output parser for the conversational agent.
agents.mrkl.base.ChainConfig(action_name, ...)
Configuration for chain to use in MRKL system.
agents.mrkl.base.MRKLChain
[Deprecated] Chain that implements the MRKL system.
agents.mrkl.base.ZeroShotAgent
[Deprecated] Agent for the MRKL chain.
agents.mrkl.output_parser.MRKLOutputParser
MRKL Output parser for the chat agent.
agents.openai_assistant.base.OpenAIAssistantAction
AgentAction with info needed to submit custom tool output to existing run.
agents.openai_assistant.base.OpenAIAssistantFinish
AgentFinish with run and thread metadata.
agents.openai_assistant.base.OpenAIAssistantRunnable
Run an OpenAI Assistant.
agents.openai_functions_agent.agent_token_buffer_memory.AgentTokenBufferMemory
Memory used to save agent output AND intermediate steps.
agents.openai_functions_agent.base.OpenAIFunctionsAgent
[Deprecated] An Agent driven by OpenAI's function-powered API.
agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
[Deprecated] An Agent driven by OpenAI's function-powered API.
agents.output_parsers.json.JSONAgentOutputParser
Parses tool invocations and final answers in JSON format.
agents.output_parsers.openai_functions.OpenAIFunctionsAgentOutputParser
Parses a message into agent action/finish.
agents.output_parsers.openai_tools.OpenAIToolsAgentOutputParser
Parses a message into agent actions/finish.
agents.output_parsers.react_json_single_input.ReActJsonSingleInputOutputParser
Parses ReAct-style LLM calls that have a single tool input in json format.
agents.output_parsers.react_single_input.ReActSingleInputOutputParser
Parses ReAct-style LLM calls that have a single tool input.
agents.output_parsers.self_ask.SelfAskOutputParser
Parses self-ask style LLM calls.
agents.output_parsers.tools.ToolAgentAction
Override init to support instantiation by position for backward compat.
agents.output_parsers.tools.ToolsAgentOutputParser
Parses a message into agent actions/finish.
agents.output_parsers.xml.XMLAgentOutputParser
Parses tool invocations and final answers in XML format.
agents.react.base.DocstoreExplorer(docstore)
[Deprecated] Class to assist with exploration of a document store.
agents.react.base.ReActChain
[Deprecated] Chain that implements the ReAct paper.
agents.react.base.ReActDocstoreAgent
[Deprecated] Agent for the ReAct chain.
agents.react.base.ReActTextWorldAgent
[Deprecated] Agent for the ReAct TextWorld chain.
agents.react.output_parser.ReActOutputParser
Output parser for the ReAct agent.
agents.schema.AgentScratchPadChatPromptTemplate
Chat prompt template for the agent scratchpad. | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
agents.self_ask_with_search.base.SelfAskWithSearchAgent
[Deprecated] Agent for the self-ask-with-search paper.
agents.self_ask_with_search.base.SelfAskWithSearchChain
[Deprecated] Chain that does self-ask with search.
agents.structured_chat.base.StructuredChatAgent
[Deprecated] Structured Chat Agent.
agents.structured_chat.output_parser.StructuredChatOutputParser
Output parser for the structured chat agent.
agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries
Output parser with retries for the structured chat agent.
agents.tools.InvalidTool
Tool that is run when invalid tool name is encountered by agent.
agents.xml.base.XMLAgent
[Deprecated] Agent that uses XML tags.
Functions¶
agents.agent_toolkits.conversational_retrieval.openai_functions.create_conversational_retrieval_agent(...)
A convenience method for creating a conversational retrieval agent.
agents.agent_toolkits.vectorstore.base.create_vectorstore_agent(...)
Construct a VectorStore agent from an LLM and tools.
agents.agent_toolkits.vectorstore.base.create_vectorstore_router_agent(...)
Construct a VectorStore router agent from an LLM and tools.
agents.format_scratchpad.log.format_log_to_str(...)
Construct the scratchpad that lets the agent continue its thought process.
agents.format_scratchpad.log_to_messages.format_log_to_messages(...)
Construct the scratchpad that lets the agent continue its thought process.
agents.format_scratchpad.openai_functions.format_to_openai_function_messages(...)
Convert (AgentAction, tool output) tuples into FunctionMessages.
agents.format_scratchpad.openai_functions.format_to_openai_functions(...)
Convert (AgentAction, tool output) tuples into FunctionMessages.
agents.format_scratchpad.tools.format_to_tool_messages(...) | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
Convert (AgentAction, tool output) tuples into FunctionMessages.
agents.format_scratchpad.xml.format_xml(...)
Format the intermediate steps as XML.
agents.initialize.initialize_agent(tools, llm)
[Deprecated] Load an agent executor given tools and LLM.
agents.json_chat.base.create_json_chat_agent(...)
Create an agent that uses JSON to format its logic, built for Chat Models.
agents.load_tools.get_all_tool_names()
Get a list of all possible tool names.
agents.load_tools.load_huggingface_tool(...)
Loads a tool from the HuggingFace Hub.
agents.load_tools.load_tools(tool_names[, ...])
Load tools based on their name.
agents.loading.load_agent(path, **kwargs)
[Deprecated] Unified method for loading an agent from LangChainHub or local fs.
agents.loading.load_agent_from_config(config)
[Deprecated] Load agent from Config Dict.
agents.openai_functions_agent.base.create_openai_functions_agent(...)
Create an agent that uses OpenAI function calling.
agents.openai_tools.base.create_openai_tools_agent(...)
Create an agent that uses OpenAI tools.
agents.output_parsers.openai_tools.parse_ai_message_to_openai_tool_action(message)
Parse an AI message potentially containing tool_calls.
agents.output_parsers.tools.parse_ai_message_to_tool_action(message)
Parse an AI message potentially containing tool_calls.
agents.react.agent.create_react_agent(llm, ...)
Create an agent that uses ReAct prompting.
agents.self_ask_with_search.base.create_self_ask_with_search_agent(...)
Create an agent that uses self-ask with search prompting.
agents.structured_chat.base.create_structured_chat_agent(...)
Create an agent aimed at supporting tools with multiple inputs.
agents.tool_calling_agent.base.create_tool_calling_agent(...)
Create an agent that uses tools. | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
agents.utils.validate_tools_single_input(...)
Validate tools for single input.
agents.xml.base.create_xml_agent(llm, tools, ...)
Create an agent that uses XML to format its logic.
langchain.callbacks¶
Callback handlers allow listening to events in LangChain.
Class hierarchy:
BaseCallbackHandler --> <name>CallbackHandler # Example: AimCallbackHandler
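The handler pattern can be illustrated with a small observer-style sketch (pure Python; the class and event names below are illustrative, not the real BaseCallbackHandler API):

```python
class BaseHandlerSketch:
    """Toy base handler: components emit events, registered handlers react
    (loosely mirroring BaseCallbackHandler --> <name>CallbackHandler)."""
    def on_chain_start(self, name): pass
    def on_chain_end(self, name, output): pass

class LogCollectorHandler(BaseHandlerSketch):
    """Collects events into a list, like a logging/tracing handler."""
    def __init__(self):
        self.events = []
    def on_chain_start(self, name):
        self.events.append(f"start:{name}")
    def on_chain_end(self, name, output):
        self.events.append(f"end:{name}:{output}")

def run_chain(name, func, inputs, callbacks=()):
    """Run a component, notifying every handler before and after."""
    for cb in callbacks:
        cb.on_chain_start(name)
    output = func(inputs)
    for cb in callbacks:
        cb.on_chain_end(name, output)
    return output

handler = LogCollectorHandler()
result = run_chain("upper", lambda s: s.upper(), "hi", callbacks=[handler])
```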
Classes¶
callbacks.file.FileCallbackHandler(filename)
Callback Handler that writes to a file.
callbacks.streaming_aiter.AsyncIteratorCallbackHandler()
Callback handler that returns an async iterator.
callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*)
Callback handler that returns an async iterator.
callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler(*)
Callback handler for streaming in agents.
callbacks.tracers.logging.LoggingCallbackHandler(logger)
Tracer that logs via the input Logger.
langchain.chains¶
Chains are easily reusable components linked together.
Chains encode a sequence of calls to components like models, document retrievers,
other Chains, etc., and provide a simple interface to this sequence.
The Chain interface makes it easy to create apps that are:
Stateful: add Memory to any Chain to give it state,
Observable: pass Callbacks to a Chain to execute additional functionality,
like logging, outside the main sequence of component calls,
Composable: combine Chains with other components, including other Chains.
Class hierarchy:
Chain --> <name>Chain # Examples: LLMChain, MapReduceChain, RouterChain
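The sequence-of-calls idea can be sketched in a few lines (a toy illustration; `ToyChain` and `ToySequentialChain` are made-up names loosely mirroring the Chain hierarchy, not the real interface): each chain maps a dict of inputs to a dict of outputs, and a sequential chain threads the accumulated state through its members.

```python
class ToyChain:
    """One step: read one input key, write one output key."""
    def __init__(self, input_key, output_key, func):
        self.input_key = input_key
        self.output_key = output_key
        self.func = func

    def invoke(self, inputs):
        outputs = dict(inputs)  # keep earlier keys: the shared state
        outputs[self.output_key] = self.func(inputs[self.input_key])
        return outputs

class ToySequentialChain:
    """Run chains in order, feeding each the accumulated outputs."""
    def __init__(self, chains):
        self.chains = chains

    def invoke(self, inputs):
        for chain in self.chains:
            inputs = chain.invoke(inputs)
        return inputs

pipeline = ToySequentialChain([
    ToyChain("text", "upper", str.upper),
    ToyChain("upper", "greeting", lambda s: f"HELLO, {s}!"),
])
result = pipeline.invoke({"text": "world"})
```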
Classes¶
chains.api.base.APIChain
Chain that makes API calls and summarizes the responses to answer a question.
chains.api.openapi.chain.OpenAPIEndpointChain
Chain interacts with an OpenAPI endpoint using natural language.
chains.api.openapi.requests_chain.APIRequesterChain
Get the request parser. | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
chains.api.openapi.requests_chain.APIRequesterOutputParser
Parse the request and error tags.
chains.api.openapi.response_chain.APIResponderChain
Get the response parser.
chains.api.openapi.response_chain.APIResponderOutputParser
Parse the response and error tags.
chains.base.Chain
Abstract base class for creating structured sequences of calls to components.
chains.combine_documents.base.AnalyzeDocumentChain
Chain that splits documents, then analyzes them in pieces.
chains.combine_documents.base.BaseCombineDocumentsChain
Base interface for chains combining documents.
chains.combine_documents.map_reduce.MapReduceDocumentsChain
Combining documents by mapping a chain over them, then combining results.
chains.combine_documents.map_rerank.MapRerankDocumentsChain
Combining documents by mapping a chain over them, then reranking results.
chains.combine_documents.reduce.AsyncCombineDocsProtocol(...)
Interface for the combine_docs method.
chains.combine_documents.reduce.CombineDocsProtocol(...)
Interface for the combine_docs method.
chains.combine_documents.reduce.ReduceDocumentsChain
Combine documents by recursively reducing them.
chains.combine_documents.refine.RefineDocumentsChain
Combine documents by doing a first pass and then refining on more documents.
chains.combine_documents.stuff.StuffDocumentsChain
Chain that combines documents by stuffing into context.
chains.constitutional_ai.base.ConstitutionalChain
Chain for applying constitutional principles.
chains.constitutional_ai.models.ConstitutionalPrinciple
Class for a constitutional principle.
chains.conversation.base.ConversationChain
Chain to have a conversation and load context from memory.
chains.conversational_retrieval.base.BaseConversationalRetrievalChain
Chain for chatting with an index.
chains.conversational_retrieval.base.ChatVectorDBChain
Chain for chatting with a vector database.
chains.conversational_retrieval.base.ConversationalRetrievalChain | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
Chain for having a conversation based on retrieved documents.
chains.conversational_retrieval.base.InputType
Input type for ConversationalRetrievalChain.
chains.elasticsearch_database.base.ElasticsearchDatabaseChain
Chain for interacting with Elasticsearch Database.
chains.flare.base.FlareChain
Chain that combines a retriever, a question generator, and a response generator.
chains.flare.base.QuestionGeneratorChain
Chain that generates questions from uncertain spans.
chains.flare.prompts.FinishedOutputParser
Output parser that checks if the output is finished.
chains.graph_qa.arangodb.ArangoGraphQAChain
Chain for question-answering against a graph by generating AQL statements.
chains.graph_qa.base.GraphQAChain
Chain for question-answering against a graph.
chains.graph_qa.cypher.GraphCypherQAChain
Chain for question-answering against a graph by generating Cypher statements.
chains.graph_qa.cypher_utils.CypherQueryCorrector(schemas)
Used to correct relationship direction in generated Cypher statements.
chains.graph_qa.cypher_utils.Schema(...)
Create new instance of Schema(left_node, relation, right_node)
chains.graph_qa.falkordb.FalkorDBQAChain
Chain for question-answering against a graph by generating Cypher statements.
chains.graph_qa.gremlin.GremlinQAChain
Chain for question-answering against a graph by generating gremlin statements.
chains.graph_qa.hugegraph.HugeGraphQAChain
Chain for question-answering against a graph by generating gremlin statements.
chains.graph_qa.kuzu.KuzuQAChain
Question-answering against a graph by generating Cypher statements for Kùzu.
chains.graph_qa.nebulagraph.NebulaGraphQAChain | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
Chain for question-answering against a graph by generating nGQL statements.
chains.graph_qa.neptune_cypher.NeptuneOpenCypherQAChain
Chain for question-answering against a Neptune graph by generating openCypher statements.
chains.graph_qa.neptune_sparql.NeptuneSparqlQAChain
Chain for question-answering against a Neptune graph by generating SPARQL statements.
chains.graph_qa.ontotext_graphdb.OntotextGraphDBQAChain
Question-answering against Ontotext GraphDB.
chains.graph_qa.sparql.GraphSparqlQAChain
Question-answering against an RDF or OWL graph by generating SPARQL statements.
chains.hyde.base.HypotheticalDocumentEmbedder
Generate hypothetical document for query, and then embed that.
chains.llm.LLMChain
Chain to run queries against LLMs.
chains.llm_checker.base.LLMCheckerChain
Chain for question-answering with self-verification.
chains.llm_math.base.LLMMathChain
Chain that interprets a prompt and executes python code to do math.
chains.llm_requests.LLMRequestsChain
Chain that requests a URL and then uses an LLM to parse results.
chains.llm_summarization_checker.base.LLMSummarizationCheckerChain
Chain for question-answering with self-verification.
chains.mapreduce.MapReduceChain
Map-reduce chain.
chains.moderation.OpenAIModerationChain
Pass input through a moderation endpoint.
chains.natbot.base.NatBotChain
Implement an LLM driven browser.
chains.natbot.crawler.Crawler()
A crawler for web pages.
chains.natbot.crawler.ElementInViewPort | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
A typed dictionary containing information about elements in the viewport.
chains.openai_functions.citation_fuzzy_match.FactWithEvidence
Class representing a single statement.
chains.openai_functions.citation_fuzzy_match.QuestionAnswer
A question and its answer as a list of facts each one should have a source.
chains.openai_functions.openapi.SimpleRequestChain
Chain for making a simple request to an API endpoint.
chains.openai_functions.qa_with_structure.AnswerWithSources
An answer to the question, with sources.
chains.prompt_selector.BasePromptSelector
Base class for prompt selectors.
chains.prompt_selector.ConditionalPromptSelector
Prompt collection that goes through conditionals.
chains.qa_generation.base.QAGenerationChain
Base class for question-answer generation chains.
chains.qa_with_sources.base.BaseQAWithSourcesChain
Question answering chain with sources over documents.
chains.qa_with_sources.base.QAWithSourcesChain
Question answering with sources over documents.
chains.qa_with_sources.loading.LoadingCallable(...)
Interface for loading the combine documents chain.
chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain
Question-answering with sources over an index.
chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain
Question-answering with sources over a vector database.
chains.query_constructor.base.StructuredQueryOutputParser
Output parser that parses a structured query.
chains.query_constructor.ir.Comparator(value)
Enumerator of the comparison operators.
chains.query_constructor.ir.Comparison
A comparison to a value.
chains.query_constructor.ir.Expr
Base class for all expressions.
chains.query_constructor.ir.FilterDirective
A filtering expression.
chains.query_constructor.ir.Operation
A logical operation over other directives.
chains.query_constructor.ir.Operator(value)
Enumerator of the operations.
chains.query_constructor.ir.StructuredQuery | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
A structured query.
chains.query_constructor.ir.Visitor()
Defines interface for IR translation using visitor pattern.
chains.query_constructor.parser.ISO8601Date
A date in ISO 8601 format (YYYY-MM-DD).
chains.query_constructor.schema.AttributeInfo
Information about a data source attribute.
chains.retrieval_qa.base.BaseRetrievalQA
Base class for question-answering chains.
chains.retrieval_qa.base.RetrievalQA
Chain for question-answering against an index.
chains.retrieval_qa.base.VectorDBQA
Chain for question-answering against a vector database.
chains.router.base.MultiRouteChain
Use a single chain to route an input to one of multiple candidate chains.
chains.router.base.Route(destination, ...)
Create new instance of Route(destination, next_inputs)
chains.router.base.RouterChain
Chain that outputs the name of a destination chain and the inputs to it.
chains.router.embedding_router.EmbeddingRouterChain
Chain that uses embeddings to route between options.
chains.router.llm_router.LLMRouterChain
A router chain that uses an LLM chain to perform routing.
chains.router.llm_router.RouterOutputParser
Parser for output of router chain in the multi-prompt chain.
chains.router.multi_prompt.MultiPromptChain
A multi-route chain that uses an LLM router chain to choose amongst prompts.
chains.router.multi_retrieval_qa.MultiRetrievalQAChain
A multi-route chain that uses an LLM router chain to choose amongst retrieval qa chains.
chains.sequential.SequentialChain
Chain where the outputs of one chain feed directly into the next.
chains.sequential.SimpleSequentialChain
Simple chain where the outputs of one step feed directly into the next.
chains.sql_database.query.SQLInput
Input for a SQL Chain. | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
chains.sql_database.query.SQLInputWithTables
Input for a SQL Chain.
chains.transform.TransformChain
Chain that transforms the chain output.
Functions¶
chains.combine_documents.reduce.acollapse_docs(...)
Execute a collapse function on a set of documents and merge their metadatas.
chains.combine_documents.reduce.collapse_docs(...)
Execute a collapse function on a set of documents and merge their metadatas.
chains.combine_documents.reduce.split_list_of_docs(...)
Split Documents into subsets that each meet a cumulative length constraint.
chains.combine_documents.stuff.create_stuff_documents_chain(...)
Create a chain for passing a list of Documents to a model.
chains.ernie_functions.base.convert_python_function_to_ernie_function(...)
Convert a Python function to an Ernie function-calling API compatible dict.
chains.ernie_functions.base.convert_to_ernie_function(...)
Convert a raw function/class to an Ernie function.
chains.ernie_functions.base.create_ernie_fn_chain(...)
[Legacy] Create an LLM chain that uses Ernie functions.
chains.ernie_functions.base.create_ernie_fn_runnable(...)
Create a runnable sequence that uses Ernie functions.
chains.ernie_functions.base.create_structured_output_chain(...)
[Legacy] Create an LLMChain that uses an Ernie function to get a structured output.
chains.ernie_functions.base.create_structured_output_runnable(...)
Create a runnable that uses an Ernie function to get a structured output.
chains.ernie_functions.base.get_ernie_output_parser(...)
Get the appropriate function output parser given the user functions.
chains.example_generator.generate_example(...)
Return another example given a list of examples for a prompt.
chains.graph_qa.cypher.construct_schema(...)
Filter the schema based on included or excluded types.
chains.graph_qa.cypher.extract_cypher(text)
Extract Cypher code from a text.
chains.graph_qa.falkordb.extract_cypher(text)
Extract Cypher code from a text.
chains.graph_qa.gremlin.extract_gremlin(text)
Extract Gremlin code from a text.
chains.graph_qa.neptune_cypher.extract_cypher(text)
Extract Cypher code from text using Regex.
chains.graph_qa.neptune_cypher.trim_query(query)
Trim the query to only include Cypher keywords.
chains.graph_qa.neptune_cypher.use_simple_prompt(llm)
Decides whether to use the simple prompt.
chains.graph_qa.neptune_sparql.extract_sparql(query)
chains.history_aware_retriever.create_history_aware_retriever(...)
Create a chain that takes conversation history and returns documents.
chains.loading.load_chain(path, **kwargs)
Unified method for loading a chain from LangChainHub or local fs.
chains.loading.load_chain_from_config(...)
Load chain from Config Dict.
chains.openai_functions.base.create_openai_fn_chain(...)
[Deprecated] [Legacy] Create an LLM chain that uses OpenAI functions.
chains.openai_functions.base.create_structured_output_chain(...)
[Deprecated] [Legacy] Create an LLMChain that uses an OpenAI function to get a structured output.
chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain(llm)
Create a citation fuzzy match chain.
chains.openai_functions.extraction.create_extraction_chain(...)
Creates a chain that extracts information from a passage.
chains.openai_functions.extraction.create_extraction_chain_pydantic(...)
Creates a chain that extracts information from a passage using pydantic schema. | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
chains.openai_functions.openapi.get_openapi_chain(spec)
Create a chain for querying an API from a OpenAPI spec.
chains.openai_functions.openapi.openapi_spec_to_openai_fn(spec)
Convert a valid OpenAPI spec to the JSON Schema format expected for OpenAI functions.
chains.openai_functions.qa_with_structure.create_qa_with_sources_chain(llm)
Create a question answering chain that returns an answer with sources.
chains.openai_functions.qa_with_structure.create_qa_with_structure_chain(...)
Create a question answering chain that returns an answer with sources based on schema.
chains.openai_functions.tagging.create_tagging_chain(...)
Creates a chain that extracts information from a passage.
chains.openai_functions.tagging.create_tagging_chain_pydantic(...)
Creates a chain that extracts information from a passage.
chains.openai_functions.utils.get_llm_kwargs(...)
Returns the kwargs for the LLMChain constructor.
chains.openai_tools.extraction.create_extraction_chain_pydantic(...)
Creates a chain that extracts information from a passage.
chains.prompt_selector.is_chat_model(llm)
Check if the language model is a chat model.
chains.prompt_selector.is_llm(llm)
Check if the language model is a LLM.
chains.qa_with_sources.loading.load_qa_with_sources_chain(llm)
Load a question answering with sources chain.
chains.query_constructor.base.construct_examples(...)
Construct examples from input-output pairs.
chains.query_constructor.base.fix_filter_directive(...)
Fix invalid filter directive.
chains.query_constructor.base.get_query_constructor_prompt(...)
Create query construction prompt.
chains.query_constructor.base.load_query_constructor_chain(...)
Load a query constructor chain.
chains.query_constructor.base.load_query_constructor_runnable(...)
Load a query constructor runnable chain.
chains.query_constructor.parser.get_parser([...])
Returns a parser for the query language. | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
chains.query_constructor.parser.v_args(...)
Dummy decorator for when lark is not installed.
chains.retrieval.create_retrieval_chain(...)
Create retrieval chain that retrieves documents and then passes them on.
chains.sql_database.query.create_sql_query_chain(llm, db)
Create a chain that generates SQL queries.
chains.structured_output.base.create_openai_fn_runnable(...)
Create a runnable sequence that uses OpenAI functions.
chains.structured_output.base.create_structured_output_runnable(...)
Create a runnable for extracting structured outputs.
chains.structured_output.base.get_openai_output_parser(...)
Get the appropriate function output parser given the user functions.
langchain.embeddings¶
Embedding models are wrappers around embedding models
from different APIs and services.
Embedding models can be LLMs or not.
Class hierarchy:
Embeddings --> <name>Embeddings # Examples: OpenAIEmbeddings, HuggingFaceEmbeddings
Classes¶
embeddings.cache.CacheBackedEmbeddings(...)
Interface for caching results from embedding models.
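The caching pattern behind CacheBackedEmbeddings can be sketched in plain Python. This is a conceptual sketch, not the LangChain API: the `CachingEmbedder` class, the toy `fake_embed` function, and the dict-backed store are illustrative stand-ins for the real embedding model and byte store.

```python
import hashlib

class CachingEmbedder:
    """Conceptual sketch: cache embedding vectors keyed by a hash of the input text."""

    def __init__(self, embed_fn, store=None):
        self.embed_fn = embed_fn  # the underlying (expensive) embedding call
        self.store = store if store is not None else {}
        self.misses = 0

    def _key(self, text):
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    def embed_documents(self, texts):
        vectors = []
        for text in texts:
            key = self._key(text)
            if key not in self.store:
                self.misses += 1  # only unseen texts hit the model
                self.store[key] = self.embed_fn(text)
            vectors.append(self.store[key])
        return vectors

# Toy "embedding": character counts stand in for a real model call.
def fake_embed(text):
    return [len(text), text.count(" ")]

embedder = CachingEmbedder(fake_embed)
embedder.embed_documents(["hello world", "hello world", "goodbye"])
print(embedder.misses)  # only the 2 unique texts were actually embedded
```

Repeated texts are served from the store, so the expensive embedding call runs once per distinct input.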
langchain.evaluation¶
Evaluation chains for grading LLM and Chain outputs.
This module contains off-the-shelf evaluation chains for grading the output of
LangChain primitives such as language models and chains.
Loading an evaluator
To load an evaluator, you can use the load_evaluators or
load_evaluator functions with the
names of the evaluators to load.
from langchain.evaluation import load_evaluator
evaluator = load_evaluator("qa")
evaluator.evaluate_strings(
    prediction="We sold more than 40,000 units last week",
    input="How many units did we sell last week?",
    reference="We sold 32,378 units",
)
The evaluator must be one of EvaluatorType.
Datasets | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
To load one of the LangChain HuggingFace datasets, you can use the load_dataset function with the
name of the dataset to load.
from langchain.evaluation import load_dataset
ds = load_dataset("llm-math")
Some common use cases for evaluation include:
Grading the accuracy of a response against ground truth answers: QAEvalChain
Comparing the output of two models: PairwiseStringEvalChain or LabeledPairwiseStringEvalChain when there is additionally a reference label.
Judging the efficacy of an agent’s tool usage: TrajectoryEvalChain
Checking whether an output complies with a set of criteria: CriteriaEvalChain or LabeledCriteriaEvalChain when there is additionally a reference label.
Computing semantic difference between a prediction and reference: EmbeddingDistanceEvalChain or between two predictions: PairwiseEmbeddingDistanceEvalChain
Measuring the string distance between a prediction and reference StringDistanceEvalChain or between two predictions PairwiseStringDistanceEvalChain
Low-level API
These evaluators implement one of the following interfaces:
StringEvaluator: Evaluate a prediction string against a reference label and/or input context.
PairwiseStringEvaluator: Evaluate two prediction strings against each other. Useful for scoring preferences, measuring similarity between two chain or llm agents, or comparing outputs on similar inputs.
AgentTrajectoryEvaluator Evaluate the full sequence of actions taken by an agent.
These interfaces enable easier composability and usage within a higher level evaluation framework.
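As a concrete illustration of the embedding-distance metric used by EmbeddingDistanceEvalChain, the cosine distance between two vectors can be computed in a few lines. This is a plain-Python sketch of the metric only; the chain itself obtains the vectors from an embedding model.

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity: 0.0 for identical directions, 1.0 for orthogonal ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Vectors pointing the same way score 0; orthogonal vectors score 1.
print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

A prediction whose embedding sits close to the reference embedding therefore receives a distance near zero, which is what the evaluator reports as its score.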
Classes¶
evaluation.agents.trajectory_eval_chain.TrajectoryEval
A named tuple containing the score and reasoning for a trajectory.
evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain
A chain for evaluating ReAct style agents.
evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser
Trajectory output parser. | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
evaluation.comparison.eval_chain.LabeledPairwiseStringEvalChain
A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs.
evaluation.comparison.eval_chain.PairwiseStringEvalChain
A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs.
evaluation.comparison.eval_chain.PairwiseStringResultOutputParser
A parser for the output of the PairwiseStringEvalChain.
evaluation.criteria.eval_chain.Criteria(value)
A Criteria to evaluate.
evaluation.criteria.eval_chain.CriteriaEvalChain
LLM Chain for evaluating runs against criteria.
evaluation.criteria.eval_chain.CriteriaResultOutputParser
A parser for the output of the CriteriaEvalChain.
evaluation.criteria.eval_chain.LabeledCriteriaEvalChain
Criteria evaluation chain that requires references.
evaluation.embedding_distance.base.EmbeddingDistance(value)
Embedding Distance Metric.
evaluation.embedding_distance.base.EmbeddingDistanceEvalChain
Use embedding distances to score semantic difference between a prediction and reference.
evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain
Use embedding distances to score semantic difference between two predictions.
evaluation.exact_match.base.ExactMatchStringEvaluator(*)
Compute an exact match between the prediction and the reference.
evaluation.parsing.base.JsonEqualityEvaluator([...])
Evaluates whether the prediction is equal to the reference after parsing.
evaluation.parsing.base.JsonValidityEvaluator(...)
Evaluates whether the prediction is valid JSON.
evaluation.parsing.json_distance.JsonEditDistanceEvaluator([...])
An evaluator that calculates the edit distance between JSON strings.
evaluation.parsing.json_schema.JsonSchemaEvaluator(...)
An evaluator that validates a JSON prediction against a JSON schema reference.
evaluation.qa.eval_chain.ContextQAEvalChain
LLM Chain for evaluating QA without ground truth, based on context.
evaluation.qa.eval_chain.CotQAEvalChain
LLM Chain for evaluating QA using chain of thought reasoning. | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
evaluation.qa.eval_chain.QAEvalChain
LLM Chain for evaluating question answering.
evaluation.qa.generate_chain.QAGenerateChain
LLM Chain for generating examples for question answering.
evaluation.regex_match.base.RegexMatchStringEvaluator(*)
Compute a regex match between the prediction and the reference.
evaluation.schema.AgentTrajectoryEvaluator()
Interface for evaluating agent trajectories.
evaluation.schema.EvaluatorType(value)
The types of the evaluators.
evaluation.schema.LLMEvalChain
A base class for evaluators that use an LLM.
evaluation.schema.PairwiseStringEvaluator()
Compare the output of two models (or two outputs of the same model).
evaluation.schema.StringEvaluator()
Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.
evaluation.scoring.eval_chain.LabeledScoreStringEvalChain
A chain for scoring the output of a model on a scale of 1-10.
evaluation.scoring.eval_chain.ScoreStringEvalChain
A chain for scoring on a scale of 1-10 the output of a model.
evaluation.scoring.eval_chain.ScoreStringResultOutputParser
A parser for the output of the ScoreStringEvalChain.
evaluation.string_distance.base.PairwiseStringDistanceEvalChain
Compute string edit distances between two predictions.
evaluation.string_distance.base.StringDistance(value)
Distance metric to use.
evaluation.string_distance.base.StringDistanceEvalChain
Compute string distances between the prediction and the reference.
Functions¶
evaluation.comparison.eval_chain.resolve_pairwise_criteria(...)
Resolve the criteria for the pairwise evaluator.
evaluation.criteria.eval_chain.resolve_criteria(...)
Resolve the criteria to evaluate.
evaluation.loading.load_dataset(uri)
Load a dataset from the LangChainDatasets on HuggingFace.
evaluation.loading.load_evaluator(evaluator, *)
Load the requested evaluation chain specified by a string. | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
evaluation.loading.load_evaluators(evaluators, *)
Load evaluators specified by a list of evaluator types.
evaluation.scoring.eval_chain.resolve_criteria(...)
Resolve the criteria for the pairwise evaluator.
langchain.hub¶
Interface with the LangChain Hub.
Functions¶
hub.pull(owner_repo_commit, *[, api_url, ...])
Pulls an object from the hub and returns it as a LangChain object.
hub.push(repo_full_name, object, *[, ...])
Pushes an object to the hub and returns the URL it can be viewed at in a browser.
langchain.indexes¶
Index is used to avoid writing duplicated content
into the vectorstore and to avoid over-writing content if it’s unchanged.
Indexes also :
Create knowledge graphs from data.
Support indexing workflows from LangChain data loaders to vectorstores.
Importantly, the index keeps working even if the content being written is derived
via a set of transformations from some source content (e.g., indexing child
documents that were derived from parent documents by chunking).
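The record-keeping idea — skip writes whose content hash has already been recorded — can be sketched in plain Python. This is a conceptual sketch, not the langchain.indexes RecordManager API; `RecordTracker` and its method names are illustrative.

```python
import hashlib

class RecordTracker:
    """Sketch of the indexing idea: only write content whose hash is unseen."""

    def __init__(self):
        self.seen = set()

    def index(self, docs, write_fn):
        written = 0
        for doc in docs:
            digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
            if digest in self.seen:
                continue  # unchanged content: skip the duplicate write
            self.seen.add(digest)
            write_fn(doc)
            written += 1
        return written

store = []
tracker = RecordTracker()
first = tracker.index(["doc a", "doc b"], store.append)
second = tracker.index(["doc a", "doc c"], store.append)  # "doc a" is skipped
print(first, second, store)
```

On the second pass only the genuinely new document is written, which is the behavior that prevents duplicated content in the vectorstore.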
Classes¶
indexes.base.RecordManager(namespace)
An abstract base class representing the interface for a record manager.
indexes.graph.GraphIndexCreator
Functionality to create graph index.
indexes.vectorstore.VectorStoreIndexWrapper
Wrapper around a vectorstore for easy access.
indexes.vectorstore.VectorstoreIndexCreator
Logic for creating indexes.
langchain.memory¶
Memory maintains Chain state, incorporating context from past runs.
Class hierarchy for Memory:
BaseMemory --> BaseChatMemory --> <name>Memory # Examples: ZepMemory, MotorheadMemory
Main helpers:
BaseChatMessageHistory
Chat Message History stores the chat message history in different stores.
Class hierarchy for ChatMessageHistory: | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
BaseChatMessageHistory --> <name>ChatMessageHistory # Example: ZepChatMessageHistory
Main helpers:
AIMessage, BaseMessage, HumanMessage
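The windowing behavior behind ConversationBufferWindowMemory — keep only the last k exchanges — can be sketched in a few lines. This is a conceptual sketch, not the LangChain implementation; `WindowBufferMemory` and its method names are illustrative.

```python
from collections import deque

class WindowBufferMemory:
    """Sketch: retain only the last k (human, ai) exchanges of a conversation."""

    def __init__(self, k=2):
        self.exchanges = deque(maxlen=k)  # oldest exchange is evicted automatically

    def save_context(self, human, ai):
        self.exchanges.append((human, ai))

    def load_memory(self):
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.exchanges)

memory = WindowBufferMemory(k=2)
memory.save_context("hi", "hello")
memory.save_context("how are you?", "fine")
memory.save_context("bye", "goodbye")  # pushes the first exchange out of the window
print(memory.load_memory())
```

Bounding the window keeps the prompt context a fixed size regardless of conversation length, at the cost of forgetting older turns.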
Classes¶
memory.buffer.ConversationBufferMemory
Buffer for storing conversation memory.
memory.buffer.ConversationStringBufferMemory
Buffer for storing conversation memory.
memory.buffer_window.ConversationBufferWindowMemory
Buffer for storing conversation memory inside a limited size window.
memory.chat_memory.BaseChatMemory
Abstract base class for chat memory.
memory.combined.CombinedMemory
Combining multiple memories' data together.
memory.entity.BaseEntityStore
Abstract base class for Entity store.
memory.entity.ConversationEntityMemory
Entity extractor & summarizer memory.
memory.entity.InMemoryEntityStore
In-memory Entity store.
memory.entity.RedisEntityStore
Redis-backed Entity store.
memory.entity.SQLiteEntityStore
SQLite-backed Entity store.
memory.entity.UpstashRedisEntityStore
Upstash Redis backed Entity store.
memory.kg.ConversationKGMemory
Knowledge graph conversation memory.
memory.motorhead_memory.MotorheadMemory
Chat message memory backed by Motorhead service.
memory.readonly.ReadOnlySharedMemory
A memory wrapper that is read-only and cannot be changed.
memory.simple.SimpleMemory
Simple memory for storing context or other information that shouldn't ever change between prompts.
memory.summary.ConversationSummaryMemory
Conversation summarizer to chat memory.
memory.summary.SummarizerMixin
Mixin for summarizer.
memory.summary_buffer.ConversationSummaryBufferMemory
Buffer with summarizer for storing conversation memory.
memory.token_buffer.ConversationTokenBufferMemory
Conversation chat memory with token limit.
memory.vectorstore.VectorStoreRetrieverMemory
VectorStoreRetriever-backed memory.
memory.zep_memory.ZepMemory
Persist your chain history to the Zep MemoryStore.
Functions¶ | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
memory.utils.get_prompt_input_key(inputs, ...)
Get the prompt input key.
langchain.model_laboratory¶
Experiment with different models.
Classes¶
model_laboratory.ModelLaboratory(chains[, names])
Experiment with different models.
langchain.output_parsers¶
OutputParser classes parse the output of an LLM call.
Class hierarchy:
BaseLLMOutputParser --> BaseOutputParser --> <name>OutputParser # ListOutputParser, PydanticOutputParser
Main helpers:
Serializable, Generation, PromptValue
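An output parser's job is simply to map raw LLM text to a typed value. A boolean parser could look like the following; this is a conceptual sketch, not the BooleanOutputParser source, and `SimpleBooleanParser` is an illustrative name.

```python
class SimpleBooleanParser:
    """Sketch: parse an LLM's free-text answer into a Python bool."""

    def __init__(self, true_val="YES", false_val="NO"):
        self.true_val = true_val
        self.false_val = false_val

    def parse(self, text):
        cleaned = text.strip().upper()
        # Accept an answer only when exactly one of the two markers appears.
        if self.true_val in cleaned and self.false_val not in cleaned:
            return True
        if self.false_val in cleaned and self.true_val not in cleaned:
            return False
        raise ValueError(f"Expected {self.true_val} or {self.false_val}, got: {text!r}")

parser = SimpleBooleanParser()
print(parser.parse("Yes, that is correct."))  # True
print(parser.parse("no"))                     # False
```

Raising on ambiguous output is what makes wrappers like OutputFixingParser and RetryOutputParser useful: they catch the parsing error and ask the model to try again.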
Classes¶
output_parsers.boolean.BooleanOutputParser
Parse the output of an LLM call to a boolean.
output_parsers.combining.CombiningOutputParser
Combine multiple output parsers into one.
output_parsers.datetime.DatetimeOutputParser
Parse the output of an LLM call to a datetime.
output_parsers.enum.EnumOutputParser
Parse an output that is one of a set of values.
output_parsers.fix.OutputFixingParser
Wraps a parser and tries to fix parsing errors.
output_parsers.pandas_dataframe.PandasDataFrameOutputParser
Parse an output using Pandas DataFrame format.
output_parsers.regex.RegexParser
Parse the output of an LLM call using a regex.
output_parsers.regex_dict.RegexDictParser
Parse the output of an LLM call into a Dictionary using a regex.
output_parsers.retry.RetryOutputParser
Wraps a parser and tries to fix parsing errors.
output_parsers.retry.RetryWithErrorOutputParser
Wraps a parser and tries to fix parsing errors.
output_parsers.structured.ResponseSchema
A schema for a response from a structured output parser.
output_parsers.structured.StructuredOutputParser
Parse the output of an LLM call to a structured output. | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
output_parsers.yaml.YamlOutputParser
Parse YAML output using a pydantic model.
Functions¶
output_parsers.loading.load_output_parser(config)
Load an output parser.
langchain.retrievers¶
Retriever class returns Documents given a text query.
It is more general than a vector store. A retriever does not need to be able to
store documents, only to return (or retrieve) them. Vector stores can be used as
the backbone of a retriever, but there are other types of retrievers as well.
Class hierarchy:
BaseRetriever --> <name>Retriever # Examples: ArxivRetriever, MergerRetriever
Main helpers:
Document, Serializable, Callbacks,
CallbackManagerForRetrieverRun, AsyncCallbackManagerForRetrieverRun
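The BaseRetriever contract boils down to "text query in, list of documents out". A toy keyword retriever shows the shape; this is a conceptual sketch, not a LangChain class, and `KeywordRetriever` is an illustrative name.

```python
class KeywordRetriever:
    """Sketch of the retriever contract: a query in, ranked documents out."""

    def __init__(self, docs):
        self.docs = docs

    def get_relevant_documents(self, query, k=2):
        terms = set(query.lower().split())
        # Score each document by how many query terms it contains.
        scored = [(sum(t in doc.lower() for t in terms), doc) for doc in self.docs]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [doc for score, doc in scored[:k] if score > 0]

retriever = KeywordRetriever([
    "LangChain retrievers return documents",
    "Vector stores back many retrievers",
    "Unrelated cooking recipe",
])
print(retriever.get_relevant_documents("retrievers documents"))
```

Nothing here requires a vector store: any ranking function satisfies the contract, which is why retrievers are a more general abstraction.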
Classes¶
retrievers.contextual_compression.ContextualCompressionRetriever
Retriever that wraps a base retriever and compresses the results.
retrievers.document_compressors.base.DocumentCompressorPipeline
Document compressor that uses a pipeline of Transformers.
retrievers.document_compressors.chain_extract.LLMChainExtractor
Document compressor that uses an LLM chain to extract the relevant parts of documents.
retrievers.document_compressors.chain_extract.NoOutputParser
Parse outputs that could return a null string of some sort.
retrievers.document_compressors.chain_filter.LLMChainFilter
Filter that drops documents that aren't relevant to the query.
retrievers.document_compressors.cohere_rerank.CohereRerank
[Deprecated] Document compressor that uses Cohere Rerank API.
retrievers.document_compressors.cross_encoder_rerank.CrossEncoderReranker
Document compressor that uses CrossEncoder for reranking.
retrievers.document_compressors.embeddings_filter.EmbeddingsFilter | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
Document compressor that uses embeddings to drop documents unrelated to the query.
retrievers.document_compressors.flashrank_rerank.FlashrankRerank
Document compressor using Flashrank interface.
retrievers.ensemble.EnsembleRetriever
Retriever that ensembles the multiple retrievers.
retrievers.merger_retriever.MergerRetriever
Retriever that merges the results of multiple retrievers.
retrievers.multi_query.LineListOutputParser
Output parser for a list of lines.
retrievers.multi_query.MultiQueryRetriever
Given a query, use an LLM to write a set of queries.
retrievers.multi_vector.MultiVectorRetriever
Retrieve from a set of multiple embeddings for the same document.
retrievers.multi_vector.SearchType(value)
Enumerator of the types of search to perform.
retrievers.parent_document_retriever.ParentDocumentRetriever
Retrieve small chunks then retrieve their parent documents.
retrievers.re_phraser.RePhraseQueryRetriever
Given a query, use an LLM to re-phrase it.
retrievers.self_query.astradb.AstraDBTranslator()
Translate AstraDB internal query language elements to valid filters.
retrievers.self_query.base.SelfQueryRetriever
Retriever that uses a vector store and an LLM to generate the vector store queries.
retrievers.self_query.chroma.ChromaTranslator()
Translate Chroma internal query language elements to valid filters.
retrievers.self_query.dashvector.DashvectorTranslator()
Logic for converting internal query language elements to valid filters.
retrievers.self_query.deeplake.DeepLakeTranslator()
Translate DeepLake internal query language elements to valid filters.
retrievers.self_query.dingo.DingoDBTranslator() | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
Translate DingoDB internal query language elements to valid filters.
retrievers.self_query.elasticsearch.ElasticsearchTranslator()
Translate Elasticsearch internal query language elements to valid filters.
retrievers.self_query.milvus.MilvusTranslator()
Translate Milvus internal query language elements to valid filters.
retrievers.self_query.mongodb_atlas.MongoDBAtlasTranslator()
Translate Mongo internal query language elements to valid filters.
retrievers.self_query.myscale.MyScaleTranslator([...])
Translate MyScale internal query language elements to valid filters.
retrievers.self_query.opensearch.OpenSearchTranslator()
Translate OpenSearch internal query domain-specific language elements to valid filters.
retrievers.self_query.pgvector.PGVectorTranslator()
Translate PGVector internal query language elements to valid filters.
retrievers.self_query.pinecone.PineconeTranslator()
Translate Pinecone internal query language elements to valid filters.
retrievers.self_query.qdrant.QdrantTranslator(...)
Translate Qdrant internal query language elements to valid filters.
retrievers.self_query.redis.RedisTranslator(schema)
Visitor for translating structured queries to Redis filter expressions.
retrievers.self_query.supabase.SupabaseVectorTranslator()
Translate Langchain filters to Supabase PostgREST filters.
retrievers.self_query.tencentvectordb.TencentVectorDBTranslator([...])
retrievers.self_query.timescalevector.TimescaleVectorTranslator()
Translate the internal query language elements to valid filters.
retrievers.self_query.vectara.VectaraTranslator()
Translate Vectara internal query language elements to valid filters.
retrievers.self_query.weaviate.WeaviateTranslator()
Translate Weaviate internal query language elements to valid filters.
retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
Retriever that combines embedding similarity with recency in retrieving values.
retrievers.web_research.QuestionListOutputParser
Output parser for a list of numbered questions.
retrievers.web_research.SearchQueries
Search queries to research for the user's goal.
retrievers.web_research.WebResearchRetriever
Google Search API retriever.
Functions¶
retrievers.document_compressors.chain_extract.default_get_input(...)
Return the compression chain input.
retrievers.document_compressors.chain_filter.default_get_input(...)
Return the compression chain input.
retrievers.ensemble.unique_by_key(iterable, key)
retrievers.self_query.deeplake.can_cast_to_float(string)
Check if a string can be cast to a float.
retrievers.self_query.milvus.process_value(...)
Convert a value to a string and add double quotes if it is a string.
retrievers.self_query.vectara.process_value(value)
Convert a value to a string and add single quotes if it is a string.
langchain.runnables¶
LangChain Runnable and the LangChain Expression Language (LCEL).
The LangChain Expression Language (LCEL) offers a declarative method to build
production-grade programs that harness the power of LLMs.
Programs created using LCEL and LangChain Runnables inherently support
synchronous, asynchronous, batch, and streaming operations.
Support for async allows servers hosting the LCEL based programs
to scale better for higher concurrent loads.
Batch operations allow for processing multiple inputs in parallel.
Streaming of intermediate outputs, as they’re being generated, allows for
creating more responsive UX.
This module contains non-core Runnable classes.
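The Runnable contract that makes these operations uniform — invoke for one input, batch for several, composition via the pipe operator — can be sketched in plain Python. This is a conceptual sketch, not the langchain_core implementation; `SimpleRunnable` is an illustrative name.

```python
class SimpleRunnable:
    """Sketch of the Runnable contract: invoke one input, batch many, compose with |."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def batch(self, values):
        return [self.invoke(v) for v in values]

    def __or__(self, other):
        # Composing two runnables yields another runnable: a pipeline.
        return SimpleRunnable(lambda v: other.invoke(self.invoke(v)))

chain = SimpleRunnable(str.strip) | SimpleRunnable(str.upper)
print(chain.invoke("  hello "))   # "HELLO"
print(chain.batch(["a ", " b"]))  # ["A", "B"]
```

Because composition returns the same interface, batching (and, in the real library, async and streaming) comes for free on every composed chain.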
Classes¶
runnables.hub.HubRunnable | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
An instance of a runnable stored in the LangChain Hub.
runnables.openai_functions.OpenAIFunction
A function description for ChatOpenAI
runnables.openai_functions.OpenAIFunctionsRouter
A runnable that routes to the selected function.
langchain.smith¶
LangSmith utilities.
This module provides utilities for connecting to LangSmith. For more information on LangSmith, see the LangSmith documentation.
Evaluation
LangSmith helps you evaluate Chains and other language model application components using a number of LangChain evaluators.
An example of this is shown below, assuming you’ve created a LangSmith dataset called <my_dataset_name>:
from langsmith import Client
from langchain_community.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.smith import RunEvalConfig, run_on_dataset
# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
llm = ChatOpenAI(temperature=0)
chain = LLMChain.from_string(
llm,
"What's the answer to {your_input_key}"
)
return chain
# Load off-the-shelf evaluators via config or the EvaluatorType (string or enum)
evaluation_config = RunEvalConfig(
evaluators=[
"qa", # "Correctness" against a reference answer
"embedding_distance",
RunEvalConfig.Criteria("helpfulness"),
RunEvalConfig.Criteria({
"fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?"
}),
]
)
client = Client()
run_on_dataset(
client,
"<my_dataset_name>",
construct_chain,
evaluation=evaluation_config,
)
You can also create custom evaluators by subclassing the
StringEvaluator
or LangSmith’s RunEvaluator classes.
from typing import Optional
from langchain.evaluation import StringEvaluator
class MyStringEvaluator(StringEvaluator):
@property
def requires_input(self) -> bool:
return False
@property
def requires_reference(self) -> bool:
return True
@property
def evaluation_name(self) -> str:
return "exact_match"
def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
return {"score": prediction == reference}
evaluation_config = RunEvalConfig(
custom_evaluators = [MyStringEvaluator()],
)
run_on_dataset(
client,
"<my_dataset_name>",
construct_chain,
evaluation=evaluation_config,
)
Primary Functions
arun_on_dataset: Asynchronous function to evaluate a chain, agent, or other LangChain component over a dataset.
run_on_dataset: Function to evaluate a chain, agent, or other LangChain component over a dataset.
RunEvalConfig: Class representing the configuration for running evaluation. You can select evaluators by EvaluatorType or config, or you can pass in custom_evaluators
Classes¶
smith.evaluation.config.EvalConfig
Configuration for a given run evaluator.
smith.evaluation.config.RunEvalConfig
Configuration for a run evaluation.
smith.evaluation.config.SingleKeyEvalConfig
Configuration for a run evaluator that only requires a single key.
smith.evaluation.progress.ProgressBarCallback(total)
A simple progress bar for the console.
smith.evaluation.runner_utils.ChatModelInput
smith.evaluation.runner_utils.EvalError(...)
Your architecture raised an error.
smith.evaluation.runner_utils.InputFormatError
Raised when the input format is invalid.
smith.evaluation.runner_utils.TestResult
A dictionary of the results of a single test run.
smith.evaluation.string_run_evaluator.ChainStringRunMapper
Extract items to evaluate from the run object from a chain.
smith.evaluation.string_run_evaluator.LLMStringRunMapper
Extract items to evaluate from the run object.
smith.evaluation.string_run_evaluator.StringExampleMapper
Map an example, or row in the dataset, to the inputs of an evaluation.
smith.evaluation.string_run_evaluator.StringRunEvaluatorChain
Evaluate Run and optional examples.
smith.evaluation.string_run_evaluator.StringRunMapper
Extract items to evaluate from the run object.
smith.evaluation.string_run_evaluator.ToolStringRunMapper
Map an input to the tool.
Functions¶
smith.evaluation.name_generation.random_name()
Generate a random name.
smith.evaluation.runner_utils.arun_on_dataset(...)
Run the Chain or language model on a dataset and store traces to the specified project name.
smith.evaluation.runner_utils.run_on_dataset(...)
Run the Chain or language model on a dataset and store traces to the specified project name.
langchain.storage¶
Implementations of key-value stores and storage helpers.
Module provides implementations of various key-value stores that conform
to a simple key-value interface.
The primary goal of these storages is to support implementation of caching.
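The key-value interface these stores conform to can be sketched as follows. This is a hypothetical stand-in, not the actual LangChain class; it only mirrors the mset/mget/mdelete/yield_keys shape of the BaseStore interface.

```python
# Minimal sketch of a BaseStore-style key-value store, the interface that
# langchain.storage implementations (in-memory, file-system, ...) conform to.
from typing import Dict, Iterator, List, Optional, Sequence, Tuple


class InMemoryKVStore:
    def __init__(self) -> None:
        self._data: Dict[str, bytes] = {}

    def mset(self, pairs: Sequence[Tuple[str, bytes]]) -> None:
        # Set multiple key-value pairs at once.
        for key, value in pairs:
            self._data[key] = value

    def mget(self, keys: Sequence[str]) -> List[Optional[bytes]]:
        # Fetch multiple keys; missing keys come back as None.
        return [self._data.get(k) for k in keys]

    def mdelete(self, keys: Sequence[str]) -> None:
        for k in keys:
            self._data.pop(k, None)

    def yield_keys(self, prefix: Optional[str] = None) -> Iterator[str]:
        # Iterate over stored keys, optionally filtered by prefix.
        for k in self._data:
            if prefix is None or k.startswith(prefix):
                yield k


store = InMemoryKVStore()
store.mset([("doc:1", b"hello"), ("doc:2", b"world")])
print(store.mget(["doc:1", "doc:3"]))  # [b'hello', None]
```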
Classes¶
storage.encoder_backed.EncoderBackedStore(...)
Wraps a store with key and value encoders/decoders.
storage.file_system.LocalFileStore(root_path, *)
BaseStore interface that works on the local file system.
storage.in_memory.InMemoryBaseStore()
In-memory implementation of the BaseStore using a dictionary.
langchain.tools¶
Tools are classes that an Agent uses to interact with the world.
Each tool has a description, which the agent uses to choose the right
tool for the job.
Class hierarchy:
ToolMetaclass --> BaseTool --> <name>Tool # Examples: AIPluginTool, BaseGraphQLTool
<name> # Examples: BraveSearch, HumanInputRun
Main helpers:
CallbackManagerForToolRun, AsyncCallbackManagerForToolRun
Classes¶
tools.retriever.RetrieverInput
Input to the retriever.
Functions¶
tools.render.render_text_description(tools)
Render the tool name and description in plain text.
tools.render.render_text_description_and_args(tools)
Render the tool name, description, and args in plain text.
tools.retriever.create_retriever_tool(...[, ...])
Create a tool to do retrieval of documents.
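The "name plus description" rendering that tools.render.render_text_description produces can be sketched as below. The tool names and descriptions are made up, and SimpleTool is a toy stand-in for BaseTool.

```python
# Sketch of rendering tool names and descriptions in plain text, the idea
# behind tools.render.render_text_description: one "name: description" line
# per tool, suitable for inclusion in a prompt.
from dataclasses import dataclass
from typing import List


@dataclass
class SimpleTool:
    name: str
    description: str


def render_text_description(tools: List[SimpleTool]) -> str:
    return "\n".join(f"{t.name}: {t.description}" for t in tools)


tools = [
    SimpleTool("search", "Search the web."),
    SimpleTool("calculator", "Do math."),
]
print(render_text_description(tools))
# search: Search the web.
# calculator: Do math.
```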
langchain.utils¶
Utility functions for LangChain.
These functions do not depend on any other LangChain module.
Functions¶
utils.interactive_env.is_interactive_env()
Determine if running within IPython or Jupyter.
langchain_google_vertexai 0.1.3¶
langchain_google_vertexai.callbacks¶
Classes¶
callbacks.VertexAICallbackHandler()
Callback Handler that tracks VertexAI info.
langchain_google_vertexai.chains¶
Functions¶
chains.create_structured_runnable(function, ...)
Create a runnable sequence that uses OpenAI functions.
chains.get_output_parser(functions)
Get the appropriate function output parser given the user functions.
langchain_google_vertexai.chat_models¶
Wrapper around Google VertexAI chat-based models.
Classes¶
chat_models.ChatVertexAI
Vertex AI Chat large language models API.
langchain_google_vertexai.embeddings¶
Classes¶
embeddings.GoogleEmbeddingModelType(value)
An enumeration.
embeddings.VertexAIEmbeddings
Google Cloud VertexAI embedding models.
langchain_google_vertexai.functions_utils¶
Classes¶
functions_utils.ParametersSchema
This is a schema of currently supported definitions in function calling.
functions_utils.PydanticFunctionsOutputParser
Parse an output as a pydantic object.
langchain_google_vertexai.gemma¶
Classes¶
gemma.GemmaChatLocalHF
Create a new model by parsing and validating input data from keyword arguments.
gemma.GemmaChatLocalKaggle
Create a new model by parsing and validating input data from keyword arguments.
gemma.GemmaChatVertexAIModelGarden
Create a new model by parsing and validating input data from keyword arguments.
gemma.GemmaLocalHF
Local gemma model loaded from HuggingFace.
gemma.GemmaLocalKaggle
Local gemma chat model loaded from Kaggle.
gemma.GemmaVertexAIModelGarden
Create a new model by parsing and validating input data from keyword arguments.
Functions¶
gemma.gemma_messages_to_prompt(history)
Converts a list of messages to a chat prompt for Gemma.
langchain_google_vertexai.llms¶
Classes¶
llms.VertexAI
Google Vertex AI large language models.
langchain_google_vertexai.model_garden¶
Classes¶
model_garden.ChatAnthropicVertex
Create a new model by parsing and validating input data from keyword arguments.
model_garden.VertexAIModelGarden
Large language models served from Vertex AI Model Garden.
langchain_google_vertexai.vectorstores¶
Classes¶
vectorstores.document_storage.DataStoreDocumentStorage(...)
Stores documents in Google Cloud DataStore.
vectorstores.document_storage.DocumentStorage()
Abstract interface of a key, text storage for retrieving documents.
vectorstores.document_storage.GCSDocumentStorage(bucket)
Stores documents in Google Cloud Storage.
vectorstores.vectorstores.VectorSearchVectorStore(...)
VertexAI VectorStore that handles the search and indexing using Vector Search and stores the documents in Google Cloud Storage.
vectorstores.vectorstores.VectorSearchVectorStoreDatastore(...)
VectorSearch with DataStore document storage.
vectorstores.vectorstores.VectorSearchVectorStoreGCS(...)
Alias of VectorSearchVectorStore for consistency with the rest of vector stores with different document storage backends.
langchain_google_vertexai.vision_models¶
Classes¶
vision_models.VertexAIImageCaptioning
Implementation of the Image Captioning model as an LLM.
vision_models.VertexAIImageCaptioningChat
Implementation of the Image Captioning model as a chat.
vision_models.VertexAIImageEditorChat
Given an image and a prompt, edits the image.
vision_models.VertexAIImageGeneratorChat
Generates an image from a prompt.
vision_models.VertexAIVisualQnAChat
Chat implementation of a visual QnA model.
langchain_mistralai 0.1.2¶
langchain_mistralai.chat_models¶
Classes¶
chat_models.ChatMistralAI
A chat model that uses the MistralAI API.
Functions¶
chat_models.acompletion_with_retry(llm[, ...])
Use tenacity to retry the async completion call.
langchain_mistralai.embeddings¶
Classes¶
embeddings.MistralAIEmbeddings
MistralAI embedding models.
langchain_text_splitters 0.0.1¶
langchain_text_splitters.base¶
Classes¶
base.Language(value)
Enum of the programming languages.
base.TextSplitter(chunk_size, chunk_overlap, ...)
Interface for splitting text into chunks.
base.TokenTextSplitter([encoding_name, ...])
Splitting text to tokens using model tokenizer.
base.Tokenizer(chunk_overlap, ...)
Tokenizer data class.
Functions¶
base.split_text_on_tokens(*, text, tokenizer)
Split incoming text and return chunks using tokenizer.
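The chunk_size/chunk_overlap mechanics shared by these splitters can be illustrated with a character-level sketch. This is not the real TextSplitter (which splits on separators or tokens); it only shows how overlapping windows are produced.

```python
# Illustrative sketch of fixed-size splitting with overlap, the core idea
# behind TextSplitter(chunk_size, chunk_overlap, ...): consecutive chunks
# share chunk_overlap characters so context is not lost at boundaries.
from typing import List


def split_with_overlap(text: str, chunk_size: int, chunk_overlap: int) -> List[str]:
    step = chunk_size - chunk_overlap  # how far the window advances each time
    return [
        text[i : i + chunk_size]
        for i in range(0, max(len(text) - chunk_overlap, 1), step)
    ]


print(split_with_overlap("abcdefghij", chunk_size=4, chunk_overlap=2))
# ['abcd', 'cdef', 'efgh', 'ghij']
```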
langchain_text_splitters.character¶
Classes¶
character.CharacterTextSplitter([separator, ...])
Splitting text that looks at characters.
character.RecursiveCharacterTextSplitter([...])
Splitting text by recursively looking at characters.
langchain_text_splitters.html¶
Classes¶
html.ElementType
Element type as typed dict.
html.HTMLHeaderTextSplitter(headers_to_split_on)
Splitting HTML files based on specified headers.
langchain_text_splitters.json¶
Classes¶
json.RecursiveJsonSplitter([max_chunk_size, ...])
langchain_text_splitters.konlpy¶
Classes¶
konlpy.KonlpyTextSplitter([separator])
Splitting text using Konlpy package.
langchain_text_splitters.latex¶
Classes¶
latex.LatexTextSplitter(**kwargs)
Attempts to split the text along Latex-formatted layout elements.
langchain_text_splitters.markdown¶
Classes¶
markdown.HeaderType
Header type as typed dict.
markdown.LineType
Line type as typed dict.
markdown.MarkdownHeaderTextSplitter(...[, ...])
Splitting markdown files based on specified headers.
markdown.MarkdownTextSplitter(**kwargs)
Attempts to split the text along Markdown-formatted headings.
langchain_text_splitters.nltk¶
Classes¶
nltk.NLTKTextSplitter([separator, language])
Splitting text using NLTK package.
langchain_text_splitters.python¶
Classes¶
python.PythonCodeTextSplitter(**kwargs)
Attempts to split the text along Python syntax.
langchain_text_splitters.sentence_transformers¶
Classes¶
sentence_transformers.SentenceTransformersTokenTextSplitter([...])
Splitting text to tokens using sentence model tokenizer.
langchain_text_splitters.spacy¶
Classes¶
spacy.SpacyTextSplitter([separator, ...])
Splitting text using Spacy package.
langchain_exa 0.0.1¶
langchain_exa.retrievers¶
Classes¶
retrievers.ExaSearchRetriever
Exa Search retriever.
langchain_exa.tools¶
Tool for the Exa Search API.
Classes¶
tools.ExaFindSimilarResults
Tool that queries the Metaphor Search API and gets back json.
tools.ExaSearchResults
Tool that queries the Metaphor Search API and gets back json.
langchain_nomic 0.0.2¶
langchain_nomic.embeddings¶
Classes¶
embeddings.NomicEmbeddings(*, model[, ...])
NomicEmbeddings embedding model.
langchain_text_splitters.html.HTMLHeaderTextSplitter¶
class langchain_text_splitters.html.HTMLHeaderTextSplitter(headers_to_split_on: List[Tuple[str, str]], return_each_element: bool = False)[source]¶
Splitting HTML files based on specified headers.
Requires lxml package.
Create a new HTMLHeaderTextSplitter.
Parameters
headers_to_split_on (List[Tuple[str, str]]) – list of tuples of headers we want to track mapped to
(arbitrary) keys for metadata. Allowed header values: h1, h2, h3, h4,
h5, h6, e.g. [("h1", "Header 1"), ("h2", "Header 2")].
return_each_element (bool) – Return each element w/ associated headers.
Methods
__init__(headers_to_split_on[, ...])
Create a new HTMLHeaderTextSplitter.
aggregate_elements_to_chunks(elements)
Combine elements with common metadata into chunks
split_text(text)
Split HTML text string
split_text_from_file(file)
Split HTML file
split_text_from_url(url)
Split HTML from web URL
__init__(headers_to_split_on: List[Tuple[str, str]], return_each_element: bool = False)[source]¶
Create a new HTMLHeaderTextSplitter.
Parameters
headers_to_split_on (List[Tuple[str, str]]) – list of tuples of headers we want to track mapped to
(arbitrary) keys for metadata. Allowed header values: h1, h2, h3, h4,
h5, h6, e.g. [("h1", "Header 1"), ("h2", "Header 2")].
return_each_element (bool) – Return each element w/ associated headers.
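The splitting idea described above — track the most recent header at each level and attach it as metadata to subsequent text — can be sketched with the standard-library HTML parser. This is illustrative only: the real HTMLHeaderTextSplitter uses lxml, returns Document objects, and also pops deeper headers when a same- or higher-level header appears.

```python
# Self-contained sketch of header-based HTML splitting: each non-header
# text chunk is emitted with the headers currently "in scope" as metadata.
from html.parser import HTMLParser
from typing import Dict, List, Tuple


class HeaderSplitter(HTMLParser):
    def __init__(self, headers_to_split_on: List[Tuple[str, str]]):
        super().__init__()
        self.keys = dict(headers_to_split_on)  # e.g. {"h1": "Header 1"}
        self.meta: Dict[str, str] = {}
        self.chunks: List[Tuple[Dict[str, str], str]] = []
        self._tag = None

    def handle_starttag(self, tag, attrs):
        self._tag = tag

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self._tag in self.keys:
            # Header text updates the metadata for following chunks.
            self.meta = {**self.meta, self.keys[self._tag]: text}
        else:
            # Ordinary text becomes a chunk tagged with current headers.
            self.chunks.append((dict(self.meta), text))


p = HeaderSplitter([("h1", "Header 1"), ("h2", "Header 2")])
p.feed("<h1>Intro</h1><p>First part.</p><h2>Details</h2><p>Second part.</p>")
print(p.chunks)
# [({'Header 1': 'Intro'}, 'First part.'),
#  ({'Header 1': 'Intro', 'Header 2': 'Details'}, 'Second part.')]
```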
aggregate_elements_to_chunks(elements: List[ElementType]) → List[Document][source]¶
Combine elements with common metadata into chunks
Parameters
elements (List[ElementType]) – HTML element content with associated identifying info and metadata
Return type
List[Document]
split_text(text: str) → List[Document][source]¶
Split HTML text string
Parameters
text (str) – HTML text
Return type
List[Document]
split_text_from_file(file: Any) → List[Document][source]¶
Split HTML file
Parameters
file (Any) – HTML file
Return type
List[Document]
split_text_from_url(url: str) → List[Document][source]¶
Split HTML from web URL
Parameters
url (str) – web URL
Return type
List[Document]
langchain_text_splitters.html.ElementType¶
class langchain_text_splitters.html.ElementType[source]¶
Element type as typed dict.
url: str¶
xpath: str¶
content: str¶
metadata: Dict[str, str]¶
langchain.model_laboratory.ModelLaboratory¶
class langchain.model_laboratory.ModelLaboratory(chains: Sequence[Chain], names: Optional[List[str]] = None)[source]¶
Experiment with different models.
Initialize with chains to experiment with.
Parameters
chains (Sequence[Chain]) – list of chains to experiment with.
names (Optional[List[str]]) –
Methods
__init__(chains[, names])
Initialize with chains to experiment with.
compare(text)
Compare model outputs on an input text.
from_llms(llms[, prompt])
Initialize with LLMs to experiment with and optional prompt.
__init__(chains: Sequence[Chain], names: Optional[List[str]] = None)[source]¶
Initialize with chains to experiment with.
Parameters
chains (Sequence[Chain]) – list of chains to experiment with.
names (Optional[List[str]]) –
compare(text: str) → None[source]¶
Compare model outputs on an input text.
If a prompt was provided with starting the laboratory, then this text will be
fed into the prompt. If no prompt was provided, then the input text is the
entire prompt.
Parameters
text (str) – input text to run all models on.
Return type
None
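The compare() behavior — run one input through every chain and show each model's output side by side — can be sketched with toy callables. This stand-in is not the real class: plain functions replace LangChain chains, and it returns the results dict for inspection, whereas the real compare() only prints.

```python
# Toy sketch of the ModelLaboratory idea: feed the same text to several
# "chains" and report each one's output for comparison.
from typing import Any, Callable, Dict, List, Optional, Sequence


class ToyModelLab:
    def __init__(
        self,
        chains: Sequence[Callable[[str], Any]],
        names: Optional[List[str]] = None,
    ):
        self.chains = list(chains)
        self.names = names or [f"chain_{i}" for i in range(len(self.chains))]

    def compare(self, text: str) -> Dict[str, Any]:
        results = {n: c(text) for n, c in zip(self.names, self.chains)}
        for name, output in results.items():
            print(f"{name}: {output}")
        return results


lab = ToyModelLab([str.upper, str.lower], names=["upper", "lower"])
lab.compare("Hello")
# upper: HELLO
# lower: hello
```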
classmethod from_llms(llms: List[BaseLLM], prompt: Optional[PromptTemplate] = None) → ModelLaboratory[source]¶
Initialize with LLMs to experiment with and optional prompt.
Parameters
llms (List[BaseLLM]) – list of LLMs to experiment with
prompt (Optional[PromptTemplate]) – Optional prompt to use to prompt the LLMs. Defaults to None.
If a prompt was provided, it should only have one input variable.
Return type
ModelLaboratory
Examples using ModelLaboratory¶
Manifest
Model comparison
langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter¶
class langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: Optional[int] = None, **kwargs: Any)[source]¶
Splitting text to tokens using sentence model tokenizer.
Create a new TextSplitter.
Methods
__init__([chunk_overlap, model_name, ...])
Create a new TextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a list of documents.
count_tokens(*, text)
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
Parameters
chunk_overlap (int) –
model_name (str) –
tokens_per_chunk (Optional[int]) –
kwargs (Any) –
__init__(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: Optional[int] = None, **kwargs: Any) → None[source]¶
Create a new TextSplitter.
Parameters
chunk_overlap (int) –
model_name (str) –
tokens_per_chunk (Optional[int]) –
kwargs (Any) –
Return type
None
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a list of documents.
Parameters
documents (Sequence[Document]) – A sequence of Documents to be transformed.
kwargs (Any) –
Returns
A list of transformed Documents.
Return type
Sequence[Document]
count_tokens(*, text: str) → int[source]¶
Parameters
text (str) –
Return type
int
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶
Create documents from a list of texts.
Parameters
texts (List[str]) –
metadatas (Optional[List[dict]]) –
Return type
List[Document]
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶
Text splitter that uses HuggingFace tokenizer to count length.
Parameters
tokenizer (Any) –
kwargs (Any) –
Return type
TextSplitter
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
Parameters
encoding_name (str) –
model_name (Optional[str]) –
allowed_special (Union[Literal['all'], ~typing.AbstractSet[str]]) –
disallowed_special (Union[Literal['all'], ~typing.Collection[str]]) –
kwargs (Any) –
Return type
TS
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
Parameters
documents (Iterable[Document]) –
Return type
List[Document]
split_text(text: str) → List[str][source]¶
Split text into multiple components.
Parameters
text (str) –
Return type
List[str]
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
Parameters
documents (Sequence[Document]) –
kwargs (Any) –
Return type
Sequence[Document]
Examples using SentenceTransformersTokenTextSplitter¶
Split by tokens