id stringlengths 14 16 | text stringlengths 20 2.3k | source stringlengths 65 166 |
|---|---|---|
7e1354ce2357-0 | langchain_weaviate 0.0.2¶ | https://api.python.langchain.com/en/latest/weaviate_api_reference.html |
06abb06c2812-0 | langchain_unstructured 0.1.3¶
langchain_unstructured.document_loaders¶
Unstructured document loader.
Classes¶
document_loaders.UnstructuredLoader([...])
Unstructured document loader interface. | https://api.python.langchain.com/en/latest/unstructured_api_reference.html |
3d43f3293ea4-0 | langchain_experimental 0.0.65¶
langchain_experimental.agents¶
An agent is a class that uses an LLM to choose a sequence of actions to take.
In Chains, a sequence of actions is hardcoded. In Agents,
a language model is used as a reasoning engine to determine which actions
to take and in which order.
Agents select and use Tools and Toolkits for actions.
Functions¶
agents.agent_toolkits.csv.base.create_csv_agent(...)
Create a pandas DataFrame agent by loading a CSV file into a DataFrame.
agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent(llm, df)
Construct a Pandas agent from an LLM and dataframe(s).
agents.agent_toolkits.python.base.create_python_agent(...)
Construct a Python agent from an LLM and a tool.
agents.agent_toolkits.spark.base.create_spark_dataframe_agent(llm, df)
Construct a Spark agent from an LLM and dataframe.
agents.agent_toolkits.xorbits.base.create_xorbits_agent(...)
Construct a Xorbits agent from an LLM and dataframe.
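The distinction drawn above — a hardcoded sequence in Chains versus an LLM-driven choice of actions in Agents — can be sketched as a toy reasoning loop. This is a pure-Python illustration, not the library's implementation: `fake_llm` is a stand-in for a real model, and the tool names are hypothetical.

```python
# Toy sketch of the agent loop: a "reasoning engine" (here a fake LLM)
# repeatedly picks the next tool to run until it decides to finish.
def run_agent(question, tools, llm_choose):
    """Run tools in whatever order the (fake) LLM chooses, then return the answer."""
    scratchpad = []  # (action, observation) history fed back to the model
    while True:
        action, arg = llm_choose(question, scratchpad)
        if action == "finish":
            return arg
        observation = tools[action](arg)
        scratchpad.append((action, observation))

# Hypothetical tools an agent might select from.
tools = {
    "lookup_rows": lambda q: 3,          # e.g. count matching dataframe rows
    "multiply_by_two": lambda x: x * 2,
}

# Stand-in policy: a real agent would prompt an LLM for the next action.
def fake_llm(question, scratchpad):
    if not scratchpad:
        return ("lookup_rows", question)
    if len(scratchpad) == 1:
        return ("multiply_by_two", scratchpad[-1][1])
    return ("finish", scratchpad[-1][1])

result = run_agent("how many rows?", tools, fake_llm)
print(result)  # 6
```

The real `create_*_agent` helpers wire this loop up with prompts, a live LLM, and typed tools.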
langchain_experimental.autonomous_agents¶
Autonomous agents in the LangChain experimental package include
[AutoGPT](https://github.com/Significant-Gravitas/AutoGPT),
[BabyAGI](https://github.com/yoheinakajima/babyagi),
and [HuggingGPT](https://arxiv.org/abs/2303.17580) agents that
interact with language models autonomously.
These agents have specific functionalities like memory management,
task creation, execution chains, and response generation.
They differ from ordinary agents in their autonomous decision-making capabilities,
memory handling, and specialized functionality for task management and response generation.
Classes¶
autonomous_agents.autogpt.agent.AutoGPT(...)
Agent for interacting with AutoGPT. | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-1 | autonomous_agents.autogpt.memory.AutoGPTMemory
Memory for AutoGPT.
autonomous_agents.autogpt.output_parser.AutoGPTAction(...)
Action returned by AutoGPTOutputParser.
autonomous_agents.autogpt.output_parser.AutoGPTOutputParser
Output parser for AutoGPT.
autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser
Base Output parser for AutoGPT.
autonomous_agents.autogpt.prompt.AutoGPTPrompt
Prompt for AutoGPT.
autonomous_agents.autogpt.prompt_generator.PromptGenerator()
Generator of custom prompt strings.
autonomous_agents.baby_agi.baby_agi.BabyAGI
Controller model for the BabyAGI agent.
autonomous_agents.baby_agi.task_creation.TaskCreationChain
Chain generating tasks.
autonomous_agents.baby_agi.task_execution.TaskExecutionChain
Chain to execute tasks.
autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain
Chain to prioritize tasks.
autonomous_agents.hugginggpt.hugginggpt.HuggingGPT(...)
Agent for interacting with HuggingGPT.
autonomous_agents.hugginggpt.repsonse_generator.ResponseGenerationChain
Chain to generate a response.
autonomous_agents.hugginggpt.repsonse_generator.ResponseGenerator(...)
Generates a response based on the input.
autonomous_agents.hugginggpt.task_executor.Task(...)
Task to be executed.
autonomous_agents.hugginggpt.task_executor.TaskExecutor(plan)
Load tools and execute tasks.
autonomous_agents.hugginggpt.task_planner.BasePlanner
Base class for a planner.
autonomous_agents.hugginggpt.task_planner.Plan(steps)
A plan to execute.
autonomous_agents.hugginggpt.task_planner.PlanningOutputParser | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-2 | Parses the output of the planning stage.
autonomous_agents.hugginggpt.task_planner.Step(...)
A step in the plan.
autonomous_agents.hugginggpt.task_planner.TaskPlaningChain
Chain to plan tasks.
autonomous_agents.hugginggpt.task_planner.TaskPlanner
Planner for tasks.
Functions¶
autonomous_agents.autogpt.output_parser.preprocess_json_input(...)
Preprocesses a string to be parsed as json.
autonomous_agents.autogpt.prompt_generator.get_prompt(tools)
Generates a prompt string.
autonomous_agents.hugginggpt.repsonse_generator.load_response_generator(llm)
Load the ResponseGenerator.
autonomous_agents.hugginggpt.task_planner.load_chat_planner(llm)
Load the chat planner.
langchain_experimental.chat_models¶
Chat Models are a variation on language models.
While Chat Models use language models under the hood, the interface they expose
is a bit different. Rather than expose a “text in, text out” API, they expose
an interface where “chat messages” are the inputs and outputs.
Class hierarchy:
BaseLanguageModel --> BaseChatModel --> <name> # Examples: ChatOpenAI, ChatGooglePalm
Main helpers:
AIMessage, BaseMessage, HumanMessage
Classes¶
chat_models.llm_wrapper.ChatWrapper
Wrapper for chat LLMs.
chat_models.llm_wrapper.Llama2Chat
Wrapper for Llama-2-chat model.
chat_models.llm_wrapper.Mixtral
See https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1#instruction-format
chat_models.llm_wrapper.Orca
Wrapper for Orca-style models.
chat_models.llm_wrapper.Vicuna | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-3 | Wrapper for Vicuna-style models.
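The "chat messages in, chat message out" interface described above can be sketched with minimal stand-ins. `BaseMessage`, `HumanMessage`, and `AIMessage` mirror the "Main helpers" listed in this module; `EchoChatModel` is a hypothetical stand-in for a real backend such as `ChatOpenAI`.

```python
from dataclasses import dataclass

# Minimal sketch of the chat-model interface: a list of messages goes in,
# a single AI message comes out.
@dataclass
class BaseMessage:
    content: str

class HumanMessage(BaseMessage):
    pass

class AIMessage(BaseMessage):
    pass

class EchoChatModel:
    def invoke(self, messages):
        # A real chat model would call an LLM; this stand-in echoes the
        # last human turn to show the interface shape only.
        last = messages[-1].content
        return AIMessage(content=f"You said: {last}")

reply = EchoChatModel().invoke([HumanMessage(content="hello")])
print(reply.content)  # You said: hello
```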
langchain_experimental.comprehend_moderation¶
Comprehend Moderation is used to detect and handle Personally Identifiable Information (PII),
toxicity, and prompt safety in text.
The LangChain experimental package includes the AmazonComprehendModerationChain class
for comprehend moderation tasks. It is based on the Amazon Comprehend service.
This class can be configured with specific moderation settings like PII labels, redaction,
toxicity thresholds, and prompt safety thresholds.
See more at https://aws.amazon.com/comprehend/
The Amazon Comprehend service is used by several other classes:
- ComprehendToxicity checks the toxicity of text prompts using the AWS Comprehend service and takes actions based on the configuration.
- ComprehendPromptSafety validates the safety of given prompt text, raising an error if unsafe content is detected based on the specified threshold.
- ComprehendPII handles Personally Identifiable Information (PII) moderation tasks, detecting and managing PII entities in text inputs.
Classes¶
comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain
Moderation Chain, based on Amazon Comprehend service.
comprehend_moderation.base_moderation.BaseModeration(client)
Base class for moderation.
comprehend_moderation.base_moderation_callbacks.BaseModerationCallbackHandler()
Base class for moderation callback handlers.
comprehend_moderation.base_moderation_config.BaseModerationConfig
Base configuration settings for moderation.
comprehend_moderation.base_moderation_config.ModerationPiiConfig
Configuration for PII moderation filter. | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-4 | comprehend_moderation.base_moderation_config.ModerationPromptSafetyConfig
Configuration for Prompt Safety moderation filter.
comprehend_moderation.base_moderation_config.ModerationToxicityConfig
Configuration for Toxicity moderation filter.
comprehend_moderation.base_moderation_exceptions.ModerationPiiError([...])
Exception raised if PII entities are detected.
comprehend_moderation.base_moderation_exceptions.ModerationPromptSafetyError([...])
Exception raised if unsafe prompts are detected.
comprehend_moderation.base_moderation_exceptions.ModerationToxicityError([...])
Exception raised if toxic entities are detected.
comprehend_moderation.pii.ComprehendPII(client)
Class to handle Personally Identifiable Information (PII) moderation.
comprehend_moderation.prompt_safety.ComprehendPromptSafety(client)
Class to handle prompt safety moderation.
comprehend_moderation.toxicity.ComprehendToxicity(client)
Class to handle toxicity moderation.
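The threshold-based flow described above can be sketched in a few lines. A real chain calls the Amazon Comprehend API; here `fake_scores` stands in for the service response, and the exception name mirrors the ones listed in this module.

```python
# Sketch of threshold-based moderation: score the text, raise if the score
# exceeds the configured threshold, otherwise pass the text through.
class ModerationToxicityError(Exception):
    """Raised if toxic content exceeds the configured threshold."""

def moderate(text, score_fn, toxicity_threshold=0.5):
    score = score_fn(text)
    if score > toxicity_threshold:
        raise ModerationToxicityError(f"toxicity score {score} above threshold")
    return text  # text passes moderation unchanged

# Stand-in for Comprehend's toxicity scores.
fake_scores = {"friendly greeting": 0.02, "abusive rant": 0.97}

print(moderate("friendly greeting", fake_scores.get))  # friendly greeting
try:
    moderate("abusive rant", fake_scores.get)
except ModerationToxicityError as e:
    print("blocked:", e)
```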
langchain_experimental.cpal¶
Causal program-aided language (CPAL) is a concept implemented in LangChain as
a chain for causal modeling and narrative decomposition.
CPAL improves upon the program-aided language (PAL) by incorporating
causal structure to prevent hallucination in language models,
particularly when dealing with complex narratives and math
problems with nested dependencies.
CPAL involves translating causal narratives into a stack of operations,
setting hypothetical conditions for causal models, and decomposing
narratives into story elements.
It allows for the creation of causal chains that define the relationships
between different elements in a narrative, enabling the modeling and analysis
of causal relationships within a given context.
Classes¶
cpal.base.CPALChain | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-5 | Causal program-aided language (CPAL) chain implementation.
cpal.base.CausalChain
Translate the causal narrative into a stack of operations.
cpal.base.InterventionChain
Set the hypothetical conditions for the causal model.
cpal.base.NarrativeChain
Decompose the narrative into its story elements.
cpal.base.QueryChain
Query the outcome table using SQL.
cpal.constants.Constant(value)
Enum for constants used in the CPAL.
cpal.models.CausalModel
Causal data.
cpal.models.EntityModel
Entity in the story.
cpal.models.EntitySettingModel
Entity initial conditions.
cpal.models.InterventionModel
Intervention data of the story, i.e., the initial conditions.
cpal.models.NarrativeModel
Narrative input as three story elements.
cpal.models.QueryModel
Query data of the story.
cpal.models.ResultModel
Result of the story query.
cpal.models.StoryModel
Story data.
cpal.models.SystemSettingModel
System initial conditions.
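The core CPAL idea — translate a causal narrative into a stack of operations, optionally set a hypothetical condition (intervention), then query the outcome — can be sketched without any LLM. The story and all names below are illustrative.

```python
# Toy sketch of CPAL-style causal evaluation: each operation computes a
# variable from the state; interventions override the causal rule.
def run_story(operations, interventions=None):
    state = dict(interventions or {})
    for var, fn in operations:
        if var not in state:  # an intervened variable keeps its forced value
            state[var] = fn(state)
    return state

# Narrative: "Jan has 3 pets; Mary has twice as many as Jan; total = Jan + Mary."
operations = [
    ("jan", lambda s: 3),
    ("mary", lambda s: 2 * s["jan"]),
    ("total", lambda s: s["jan"] + s["mary"]),
]

print(run_story(operations)["total"])               # 9
print(run_story(operations, {"jan": 10})["total"])  # 30 (hypothetical condition)
```

The intervention propagates through downstream variables, which is what lets causal structure prevent the hallucinated answers a plain LLM might give.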
langchain_experimental.data_anonymizer¶
Data anonymizer contains both Anonymizers and Deanonymizers.
It uses the [Microsoft Presidio](https://microsoft.github.io/presidio/) library.
Anonymizers are used to replace a Personally Identifiable Information (PII)
entity text with some other
value by applying a certain operator (e.g. replace, mask, redact, encrypt).
Deanonymizers are used to revert the anonymization operation
(e.g. to decrypt an encrypted text).
Classes¶
data_anonymizer.base.AnonymizerBase()
Base abstract class for anonymizers.
data_anonymizer.base.ReversibleAnonymizerBase()
Base abstract class for reversible anonymizers. | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-6 | data_anonymizer.deanonymizer_mapping.DeanonymizerMapping(...)
Deanonymizer mapping.
data_anonymizer.presidio.PresidioAnonymizer([...])
Anonymizer using Microsoft Presidio.
data_anonymizer.presidio.PresidioAnonymizerBase([...])
Base Anonymizer using Microsoft Presidio.
data_anonymizer.presidio.PresidioReversibleAnonymizer([...])
Reversible Anonymizer using Microsoft Presidio.
Functions¶
data_anonymizer.deanonymizer_mapping.create_anonymizer_mapping(...)
Create or update the mapping used to anonymize and/or deanonymize a text.
data_anonymizer.deanonymizer_mapping.format_duplicated_operator(...)
Format the operator name with the count.
data_anonymizer.deanonymizer_matching_strategies.case_insensitive_matching_strategy(...)
Case insensitive matching strategy for deanonymization.
data_anonymizer.deanonymizer_matching_strategies.combined_exact_fuzzy_matching_strategy(...)
Combined exact and fuzzy matching strategy for deanonymization.
data_anonymizer.deanonymizer_matching_strategies.exact_matching_strategy(...)
Exact matching strategy for deanonymization.
data_anonymizer.deanonymizer_matching_strategies.fuzzy_matching_strategy(...)
Fuzzy matching strategy for deanonymization.
data_anonymizer.deanonymizer_matching_strategies.ngram_fuzzy_matching_strategy(...)
N-gram fuzzy matching strategy for deanonymization.
data_anonymizer.faker_presidio_mapping.get_pseudoanonymizer_mapping([seed])
Get a mapping of entities to pseudo anonymize them.
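The anonymize/deanonymize round trip described above can be sketched with a placeholder mapping. Presidio detects PII with NLP models; here a fixed entity list stands in for the detector, and the mapping plays the role of the deanonymizer mapping in this module.

```python
# Sketch of reversible anonymization: replace PII with placeholders and
# remember the mapping so the operation can be reverted.
def anonymize(text, entities):
    mapping = {}  # {placeholder: original value}
    for i, (entity_type, value) in enumerate(entities):
        placeholder = f"<{entity_type}_{i}>"
        text = text.replace(value, placeholder)
        mapping[placeholder] = value
    return text, mapping

def deanonymize(text, mapping):
    # Exact matching strategy: substitute each placeholder back verbatim.
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

anon, mapping = anonymize(
    "Call John at 555-1234",
    [("PERSON", "John"), ("PHONE", "555-1234")],
)
print(anon)                        # Call <PERSON_0> at <PHONE_1>
print(deanonymize(anon, mapping))  # Call John at 555-1234
```

The fuzzy and n-gram strategies listed above exist because real LLM output may not reproduce placeholders exactly, so exact matching alone can miss them.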
langchain_experimental.fallacy_removal¶
Fallacy Removal Chain runs a self-review of logical fallacies
as determined by the paper
[Robust and Explainable Identification of Logical Fallacies in Natural | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-7 | Language Arguments](https://arxiv.org/pdf/2212.07425.pdf).
It is modeled after Constitutional AI and uses the same format, but applies logical
fallacies as generalized rules in order to remove them from the output.
Classes¶
fallacy_removal.base.FallacyChain
Chain for applying logical fallacy evaluations.
fallacy_removal.models.LogicalFallacy
Logical fallacy.
langchain_experimental.generative_agents¶
Generative Agent primitives.
Classes¶
generative_agents.generative_agent.GenerativeAgent
Agent as a character with memory and innate characteristics.
generative_agents.memory.GenerativeAgentMemory
Memory for the generative agent.
langchain_experimental.graph_transformers¶
Graph Transformers transform Documents into Graph Documents.
Classes¶
graph_transformers.diffbot.DiffbotGraphTransformer(...)
Transform documents into graph documents using Diffbot NLP API.
graph_transformers.diffbot.NodesList()
List of nodes with associated properties.
graph_transformers.diffbot.SimplifiedSchema()
Simplified schema mapping.
graph_transformers.diffbot.TypeOption(value)
An enumeration.
graph_transformers.gliner.GlinerGraphTransformer(...)
A transformer class for converting documents into graph structures using the GLiNER and GLiREL models.
graph_transformers.llm.LLMGraphTransformer(llm)
Transform documents into graph-based documents using a LLM.
graph_transformers.llm.UnstructuredRelation
Create a new model by parsing and validating input data from keyword arguments.
graph_transformers.relik.RelikGraphTransformer([...])
A transformer class for converting documents into graph structures using the Relik library and models.
Functions¶
graph_transformers.diffbot.format_property_key(s)
Formats a string to be used as a property key.
graph_transformers.llm.create_simple_model([...]) | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-8 | Create a simple graph model with optional constraints on node and relationship types.
graph_transformers.llm.create_unstructured_prompt([...])
graph_transformers.llm.format_property_key(s)
graph_transformers.llm.map_to_base_node(node)
Map the SimpleNode to the base Node.
graph_transformers.llm.map_to_base_relationship(rel)
Map the SimpleRelationship to the base Relationship.
graph_transformers.llm.optional_enum_field([...])
Utility function to conditionally create a field with an enum constraint.
langchain_experimental.llm_bash¶
LLM Bash is a chain that uses an LLM to interpret a prompt and
then executes Bash code.
Classes¶
llm_bash.base.LLMBashChain
Chain that interprets a prompt and executes bash operations.
llm_bash.bash.BashProcess([strip_newlines, ...])
Wrapper for starting subprocesses.
llm_bash.prompt.BashOutputParser
Parser for bash output.
langchain_experimental.llm_symbolic_math¶
Chain that interprets a prompt and executes Python code to do math.
Heavily borrowed from llm_math; uses the [SymPy](https://www.sympy.org/) package.
Classes¶
llm_symbolic_math.base.LLMSymbolicMathChain
Chain that interprets a prompt and executes Python code to do symbolic math.
langchain_experimental.llms¶
Experimental LLM classes provide
access to the large language model (LLM) APIs and services.
Classes¶
llms.anthropic_functions.TagParser()
Parser for the tool tags.
llms.jsonformer_decoder.JsonFormer
Jsonformer wrapped LLM using HuggingFace Pipeline API.
llms.llamaapi.ChatLlamaAPI
Chat model using the Llama API. | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-9 | llms.lmformatenforcer_decoder.LMFormatEnforcer
LMFormatEnforcer wrapped LLM using HuggingFace Pipeline API.
llms.rellm_decoder.RELLM
RELLM wrapped LLM using HuggingFace Pipeline API.
Functions¶
llms.jsonformer_decoder.import_jsonformer()
Lazily import the jsonformer package.
llms.lmformatenforcer_decoder.import_lmformatenforcer()
Lazily import the lmformatenforcer package.
llms.ollama_functions.convert_to_ollama_tool(tool)
Convert a tool to an Ollama tool.
llms.ollama_functions.parse_response(message)
Extract function_call from AIMessage.
llms.rellm_decoder.import_rellm()
Lazily import the rellm package.
Deprecated classes¶
llms.anthropic_functions.AnthropicFunctions
Deprecated since version 0.0.54: Use langchain_anthropic.experimental.ChatAnthropicTools instead.
llms.ollama_functions.OllamaFunctions
Deprecated since version 0.0.64: Use langchain_ollama.ChatOllama instead.
langchain_experimental.open_clip¶
OpenCLIP Embeddings model.
OpenCLIP is a multimodal model that can encode text and images into a shared space.
See [this paper](https://arxiv.org/abs/2103.00020)
and [this repository](https://github.com/mlfoundations/open_clip) for more details.
Classes¶
open_clip.open_clip.OpenCLIPEmbeddings
OpenCLIP Embeddings model.
langchain_experimental.pal_chain¶
PAL Chain implements Program-Aided Language Models. | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-10 | See the paper: https://arxiv.org/pdf/2211.10435.pdf.
This chain is vulnerable to [arbitrary code execution](https://github.com/langchain-ai/langchain/issues/5872).
Classes¶
pal_chain.base.PALChain
Chain that implements Program-Aided Language Models (PAL).
pal_chain.base.PALValidation([...])
Validation for PAL generated code.
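The PAL pattern — the LLM writes a small program whose execution yields the answer — can be sketched as follows. The `generated` string stands in for real model output; as the warning above notes, executing model output is arbitrary code execution, so a real deployment needs sandboxing and validation (cf. PALValidation).

```python
# Sketch of Program-Aided Language Models: execute LLM-generated code and
# read off the returned value instead of asking the LLM for the final number.
generated = """
def solution():
    # "Olivia has 23 dollars and buys 5 bagels at 3 dollars each."
    money = 23
    spent = 5 * 3
    return money - spent
"""

namespace = {}
exec(generated, namespace)      # never exec untrusted output without a sandbox
print(namespace["solution"]())  # 8
```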
langchain_experimental.plan_and_execute¶
Plan-and-execute agents plan tasks with a language model (LLM) and
execute them with a separate agent.
Classes¶
plan_and_execute.agent_executor.PlanAndExecute
Plan and execute a chain of steps.
plan_and_execute.executors.base.BaseExecutor
Base executor.
plan_and_execute.executors.base.ChainExecutor
Chain executor.
plan_and_execute.planners.base.BasePlanner
Base planner.
plan_and_execute.planners.base.LLMPlanner
LLM planner.
plan_and_execute.planners.chat_planner.PlanningOutputParser
Planning output parser.
plan_and_execute.schema.BaseStepContainer
Base step container.
plan_and_execute.schema.ListStepContainer
Container for List of steps.
plan_and_execute.schema.Plan
Plan.
plan_and_execute.schema.PlanOutputParser
Plan output parser.
plan_and_execute.schema.Step
Step.
plan_and_execute.schema.StepResponse
Step response.
Functions¶
plan_and_execute.executors.agent_executor.load_agent_executor(...)
Load an agent executor.
plan_and_execute.planners.chat_planner.load_chat_planner(llm)
Load a chat planner.
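The planner/executor split above can be sketched in a few lines. A real planner is an LLM (see LLMPlanner); the keyword-based `plan` function below is a stand-in, and the step strings are illustrative.

```python
# Sketch of plan-and-execute: one component produces a plan (a list of
# steps), a separate executor runs each step in order.
def plan(objective):
    # Stand-in planner: a fixed decomposition of the objective.
    return [f"research {objective}", f"summarize {objective}"]

def execute_step(step):
    # Stand-in executor: a real ChainExecutor would run an agent per step.
    return f"done: {step}"

def plan_and_execute(objective):
    return [execute_step(step) for step in plan(objective)]

print(plan_and_execute("topic X"))
# ['done: research topic X', 'done: summarize topic X']
```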
langchain_experimental.prompt_injection_identifier¶
HuggingFace Injection Identifier is a tool that uses
[HuggingFace Prompt Injection model](https://huggingface.co/deepset/deberta-v3-base-injection) | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-11 | to detect prompt injection attacks.
Classes¶
prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier
Tool that uses HuggingFace Prompt Injection model to detect prompt injection attacks.
prompt_injection_identifier.hugging_face_identifier.PromptInjectionException([...])
Exception raised when prompt injection attack is detected.
langchain_experimental.recommenders¶
Amazon Personalize primitives.
[Amazon Personalize](https://docs.aws.amazon.com/personalize/latest/dg/what-is-personalize.html)
is a fully managed machine learning service that uses your data to generate
item recommendations for your users.
Classes¶
recommenders.amazon_personalize.AmazonPersonalize([...])
Amazon Personalize Runtime wrapper for executing real-time operations.
recommenders.amazon_personalize_chain.AmazonPersonalizeChain
Chain for retrieving recommendations from Amazon Personalize and summarizing them.
langchain_experimental.retrievers¶
The Retriever class returns Documents given a text query.
It is more general than a vector store: a retriever does not need to be able to
store documents, only to return (or retrieve) them.
Classes¶
retrievers.vector_sql_database.VectorSQLDatabaseChainRetriever
Retriever that uses Vector SQL Database.
langchain_experimental.rl_chain¶
RL (Reinforcement Learning) Chain leverages the Vowpal Wabbit (VW) models
for reinforcement learning with a context, with the goal of modifying
the prompt before the LLM call.
[Vowpal Wabbit](https://vowpalwabbit.org/) provides fast, efficient,
and flexible online machine learning techniques for reinforcement learning,
supervised learning, and more.
Classes¶
rl_chain.base.AutoSelectionScorer
Auto selection scorer.
rl_chain.base.Embedder(*args, **kwargs)
Abstract class to represent an embedder. | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-12 | rl_chain.base.Event(inputs[, selected])
Abstract class to represent an event.
rl_chain.base.Policy(**kwargs)
Abstract class to represent a policy.
rl_chain.base.RLChain
Chain that leverages the Vowpal Wabbit (VW) model as a learned policy for reinforcement learning.
rl_chain.base.Selected()
Abstract class to represent the selected item.
rl_chain.base.SelectionScorer
Abstract class to grade the chosen selection or the response of the LLM.
rl_chain.base.VwPolicy(model_repo, vw_cmd, ...)
Vowpal Wabbit policy.
rl_chain.metrics.MetricsTrackerAverage(step)
Metrics Tracker Average.
rl_chain.metrics.MetricsTrackerRollingWindow(...)
Metrics Tracker Rolling Window.
rl_chain.model_repository.ModelRepository(folder)
Model Repository.
rl_chain.pick_best_chain.PickBest
Chain that leverages the Vowpal Wabbit (VW) model for reinforcement learning with a context, with the goal of modifying the prompt before the LLM call.
rl_chain.pick_best_chain.PickBestEvent(...)
Event class for PickBest chain.
rl_chain.pick_best_chain.PickBestFeatureEmbedder(...)
Embed the BasedOn and ToSelectFrom inputs into a format that can be used by the learning policy.
rl_chain.pick_best_chain.PickBestRandomPolicy(...)
Random policy for PickBest chain.
rl_chain.pick_best_chain.PickBestSelected([...])
Selected class for PickBest chain.
rl_chain.vw_logger.VwLogger(path)
Vowpal Wabbit custom logger.
Functions¶
rl_chain.base.BasedOn(anything)
Wrap a value to indicate that it should be based on.
rl_chain.base.Embed(anything[, keep])
Wrap a value to indicate that it should be embedded.
rl_chain.base.EmbedAndKeep(anything) | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-13 | Wrap a value to indicate that it should be embedded and kept.
rl_chain.base.ToSelectFrom(anything)
Wrap a value to indicate that it should be selected from.
rl_chain.base.get_based_on_and_to_select_from(inputs)
Get the BasedOn and ToSelectFrom from the inputs.
rl_chain.base.parse_lines(parser, input_str)
Parse the input string into a list of examples.
rl_chain.base.prepare_inputs_for_autoembed(inputs)
Prepare the inputs for auto embedding.
rl_chain.helpers.embed(to_embed, model[, ...])
Embed the actions or context using the SentenceTransformer model (or a model that has an encode function).
rl_chain.helpers.embed_dict_type(item, model)
Embed a dictionary item.
rl_chain.helpers.embed_list_type(item, model)
Embed a list item.
rl_chain.helpers.embed_string_type(item, model)
Embed a string or an _Embed object.
rl_chain.helpers.is_stringtype_instance(item)
Check if an item is a string.
rl_chain.helpers.stringify_embedding(embedding)
Convert an embedding to a string.
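The BasedOn/ToSelectFrom wrappers named above tag chain inputs so the RL policy knows which value is context and which values are candidate actions. This is a minimal sketch of that convention; the trivial "policy" stands in for a learned Vowpal Wabbit model.

```python
# Sketch of the input-tagging convention used by the RL chain.
class BasedOn:
    """Marks a value as context the policy should condition on."""
    def __init__(self, value):
        self.value = value

class ToSelectFrom:
    """Marks a list of candidate actions the policy should choose from."""
    def __init__(self, value):
        self.value = value

def pick_best(inputs, policy):
    context = {k: v.value for k, v in inputs.items() if isinstance(v, BasedOn)}
    actions = next(v.value for v in inputs.values() if isinstance(v, ToSelectFrom))
    return policy(context, actions)

# Stand-in policy: always pick the first action (a VW model would learn this).
first_action = lambda ctx, actions: actions[0]

choice = pick_best(
    {"user": BasedOn("Tom"), "meal": ToSelectFrom(["pizza", "salad"])},
    first_action,
)
print(choice)  # pizza
```

In the real chain, a SelectionScorer grades the outcome after the LLM call and the score is fed back to update the policy.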
langchain_experimental.smart_llm¶
The SmartGPT chain applies self-critique using the SmartGPT workflow.
See details at https://youtu.be/wVzuvf9D9BU
The workflow performs these 3 steps:
1. Ideate: Pass the user prompt to an Ideation LLM n_ideas times; each result is an “idea”.
2. Critique: Pass the ideas to a Critique LLM, which looks for flaws in the ideas and picks the best one.
3. Resolve: Pass the critique to a Resolver LLM, which improves upon the best idea and outputs only the (improved version of) the best output. | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-14 | In total, the SmartGPT workflow will use n_ideas+2 LLM calls.
Note that SmartLLMChain will only improve results (compared to a basic LLMChain)
when the underlying models have the capability for reflection, which smaller models
often don’t.
Finally, a SmartLLMChain assumes that each underlying LLM outputs exactly 1 result.
Classes¶
smart_llm.base.SmartLLMChain
Chain for applying self-critique using the SmartGPT workflow.
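The 3-step workflow and its n_ideas + 2 call budget can be sketched with stub LLM calls. Each `call` below is a stand-in for a real LLM invocation; the counter confirms the budget stated above.

```python
# Sketch of the SmartGPT ideate/critique/resolve workflow with a call counter.
calls = []

def call(role, prompt):
    # Stand-in for one LLM invocation.
    calls.append(role)
    return f"{role}({prompt})"

def smart_llm(prompt, n_ideas=3):
    ideas = [call("ideate", prompt) for _ in range(n_ideas)]  # step 1: n_ideas calls
    critique = call("critique", " | ".join(ideas))            # step 2: 1 call
    return call("resolve", critique)                          # step 3: 1 call

result = smart_llm("name a color", n_ideas=3)
print(len(calls))  # 5, i.e. n_ideas + 2
```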
langchain_experimental.sql¶
SQL Chain interacts with SQL Database.
Classes¶
sql.base.SQLDatabaseChain
Chain for interacting with SQL Database.
sql.base.SQLDatabaseSequentialChain
Sequential chain for querying a SQL database.
sql.vector_sql.VectorSQLDatabaseChain
Chain for interacting with Vector SQL Database.
sql.vector_sql.VectorSQLOutputParser
Output Parser for Vector SQL.
sql.vector_sql.VectorSQLRetrieveAllOutputParser
Parser based on VectorSQLOutputParser.
Functions¶
sql.vector_sql.get_result_from_sqldb(db, cmd)
Get result from SQL Database.
langchain_experimental.tabular_synthetic_data¶
Generate tabular synthetic data using an LLM and a few-shot template.
Classes¶
tabular_synthetic_data.base.SyntheticDataGenerator
Generate synthetic data using the given LLM and few-shot template.
Functions¶
tabular_synthetic_data.openai.create_openai_data_generator(...)
Create an instance of SyntheticDataGenerator tailored for OpenAI models.
langchain_experimental.text_splitter¶
Experimental text splitter based on semantic similarity.
Classes¶
text_splitter.SemanticChunker(embeddings[, ...])
Split the text based on semantic similarity.
Functions¶
text_splitter.calculate_cosine_distances(...)
Calculate cosine distances between sentences. | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-15 | text_splitter.combine_sentences(sentences[, ...])
Combine sentences based on buffer size.
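The semantic-splitting idea — embed sentences, compute cosine distances between neighbours, and break where the distance spikes — can be sketched without an embedding model. The tiny hand-made vectors below stand in for real sentence embeddings.

```python
import math

# Sketch of semantic chunking: split wherever adjacent sentence embeddings
# are far apart in cosine distance.
def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1 - dot / norm

def split_on_distance(sentences, embeddings, threshold=0.5):
    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        if cosine_distance(embeddings[i - 1], embeddings[i]) > threshold:
            chunks.append(current)  # distance spike -> start a new chunk
            current = []
        current.append(sentences[i])
    chunks.append(current)
    return chunks

sentences = ["Cats purr.", "Kittens meow.", "Stocks fell today."]
embeddings = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]  # toy vectors
print(split_on_distance(sentences, embeddings))
# [['Cats purr.', 'Kittens meow.'], ['Stocks fell today.']]
```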
langchain_experimental.tools¶
Experimental Python REPL tools.
Classes¶
tools.python.tool.PythonAstREPLTool
Tool for running python code in a REPL.
tools.python.tool.PythonInputs
Python inputs.
tools.python.tool.PythonREPLTool
Tool for running python code in a REPL.
Functions¶
tools.python.tool.sanitize_input(query)
Sanitize input to the python REPL.
langchain_experimental.tot¶
Implementation of a Tree of Thought (ToT) chain based on the paper
[Large Language Model Guided Tree-of-Thought](https://arxiv.org/pdf/2305.08291.pdf).
The Tree of Thought (ToT) chain uses a tree structure to explore the space of
possible solutions to a problem.
Classes¶
tot.base.ToTChain
Chain implementing the Tree of Thought (ToT).
tot.checker.ToTChecker
Tree of Thought (ToT) checker.
tot.controller.ToTController([c])
Tree of Thought (ToT) controller.
tot.memory.ToTDFSMemory([stack])
Memory for the Tree of Thought (ToT) chain.
tot.prompts.CheckerOutputParser
Parse and check the output of the language model.
tot.prompts.JSONListOutputParser
Parse the output of a PROPOSE_PROMPT response.
tot.thought.Thought
A thought in the ToT.
tot.thought.ThoughtValidity(value)
Enum for the validity of a thought.
tot.thought_generation.BaseThoughtGenerationStrategy
Base class for a thought generation strategy.
tot.thought_generation.ProposePromptStrategy
Strategy that sequentially uses a "propose prompt".
tot.thought_generation.SampleCoTStrategy | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
3d43f3293ea4-16 | Sample strategy from a Chain-of-Thought (CoT) prompt.
Functions¶
tot.prompts.get_cot_prompt()
Get the prompt for the Chain of Thought (CoT) chain.
tot.prompts.get_propose_prompt()
Get the prompt for the PROPOSE_PROMPT chain.
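The tree exploration described above can be sketched as a depth-first search over thoughts: generate candidates, check each, and recurse only into valid branches. The generator and checker below are stand-ins for the LLM-backed generation strategies and ToTChecker; the digit-sum problem is illustrative.

```python
# Sketch of Tree of Thought search: DFS over candidate thoughts, pruning
# branches the checker marks invalid.
def dfs(path, generate, is_valid, is_solution, max_depth=3):
    if is_solution(path):
        return path
    if len(path) >= max_depth:
        return None
    for thought in generate(path):
        candidate = path + [thought]
        if is_valid(candidate):
            found = dfs(candidate, generate, is_valid, is_solution, max_depth)
            if found:
                return found
    return None

# Toy problem: build a sequence of digits 1-3 whose sum is exactly 6.
generate = lambda path: [1, 2, 3]       # stand-in thought generator
is_valid = lambda path: sum(path) <= 6  # stand-in checker: prune overshoots
is_solution = lambda path: sum(path) == 6

print(dfs([], generate, is_valid, is_solution))  # [1, 2, 3]
```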
langchain_experimental.utilities¶
Utility that simulates a standalone Python REPL.
Classes¶
utilities.python.PythonREPL
Simulates a standalone Python REPL.
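What a standalone Python REPL utility does can be sketched in a few lines: execute a code string in a persistent namespace and capture whatever it prints. This `MiniPythonREPL` is an illustrative stand-in, not the library class; the real tooling also sanitizes incoming code (see `sanitize_input` above).

```python
import io
from contextlib import redirect_stdout

# Sketch of a standalone Python REPL: exec a command string in its own
# namespace and return the captured stdout.
class MiniPythonREPL:
    def __init__(self):
        self.globals = {}  # persists across run() calls, like a real REPL

    def run(self, command: str) -> str:
        buf = io.StringIO()
        with redirect_stdout(buf):
            exec(command, self.globals)  # untrusted input would need sandboxing
        return buf.getvalue()

repl = MiniPythonREPL()
out = repl.run("x = 2 + 2; print(x)")
print(out.strip())  # 4
```

Because the namespace persists, a follow-up `repl.run("print(x * 10)")` sees `x` from the earlier call.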
langchain_experimental.video_captioning¶
Classes¶
video_captioning.base.VideoCaptioningChain
Video Captioning Chain.
video_captioning.models.AudioModel(...)
video_captioning.models.BaseModel(...)
video_captioning.models.CaptionModel(...)
video_captioning.models.VideoModel(...)
video_captioning.services.audio_service.AudioProcessor(api_key)
video_captioning.services.caption_service.CaptionProcessor(llm)
video_captioning.services.combine_service.CombineProcessor(llm)
video_captioning.services.image_service.ImageProcessor([...])
video_captioning.services.srt_service.SRTProcessor() | https://api.python.langchain.com/en/latest/experimental_api_reference.html |
2e2873670d07-0 | langchain_anthropic 0.1.23¶
langchain_anthropic.chat_models¶
Classes¶
chat_models.AnthropicTool
Anthropic tool definition.
chat_models.ChatAnthropic
Anthropic chat models.
Functions¶
chat_models.convert_to_anthropic_tool(tool)
Convert a tool-like object to an Anthropic tool definition.
Deprecated classes¶
chat_models.ChatAnthropicMessages
Deprecated since version 0.1.0: Use ChatAnthropic instead.
langchain_anthropic.experimental¶
Functions¶
experimental.get_system_message(tools)
Generate a system message that describes the available tools.
Deprecated classes¶
experimental.ChatAnthropicTools
Deprecated since version 0.1.5: Tool-calling is now officially supported by the Anthropic API so this workaround is no longer needed. Use ChatAnthropic instead.
langchain_anthropic.llms¶
Classes¶
llms.AnthropicLLM
Anthropic large language model.
Deprecated classes¶
llms.Anthropic
Deprecated since version 0.1.0: Use AnthropicLLM instead.
langchain_anthropic.output_parsers¶
Classes¶
output_parsers.ToolsOutputParser
Output parser for tool calls.
Functions¶
output_parsers.extract_tool_calls(content)
Extract tool calls from a list of content blocks. | https://api.python.langchain.com/en/latest/anthropic_api_reference.html |
f45e4c1e7dc1-0 | langchain_box 0.1.0¶
langchain_box.document_loaders¶
Box Document Loaders.
Classes¶
document_loaders.box.BoxLoader
BoxLoader.
langchain_box.retrievers¶
Box Document Loaders.
Classes¶
retrievers.box.BoxRetriever
Box retriever.
langchain_box.utilities¶
Box API Utilities.
Classes¶
utilities.box.BoxAuth
BoxAuth.
utilities.box.BoxAuthType(value)
BoxAuthType(Enum).
utilities.box.DocumentFiles(value)
DocumentFiles(Enum).
utilities.box.ImageFiles(value)
ImageFiles(Enum). | https://api.python.langchain.com/en/latest/box_api_reference.html |
39baebaa29aa-0 | langchain_chroma 0.1.3¶
langchain_chroma.vectorstores¶
This is the langchain_chroma.vectorstores module.
It contains the Chroma class, which is a vector store for handling various tasks.
Classes¶
vectorstores.Chroma([collection_name, ...])
Chroma vector store integration.
Functions¶
vectorstores.cosine_similarity(X, Y)
Row-wise cosine similarity between two equal-width matrices.
vectorstores.maximal_marginal_relevance(...)
Calculate maximal marginal relevance. | https://api.python.langchain.com/en/latest/chroma_api_reference.html |
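Maximal marginal relevance (MMR), named in the function list above, iteratively picks the candidate that balances similarity to the query against similarity to already-picked results. Below is a pure-Python sketch of the idea (not the library's implementation, which operates on NumPy matrices); the toy vectors are illustrative, and `lambda_mult=1.0` means pure relevance while `0.0` means pure diversity.

```python
import math

# Sketch of maximal marginal relevance over cosine similarity.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mmr(query, candidates, k=2, lambda_mult=0.5):
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(query, candidates[i])
            redundancy = max(
                (cosine(candidates[i], candidates[j]) for j in selected),
                default=0.0,
            )
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

query = [1.0, 0.0]
candidates = [[0.9, 0.1], [0.89, 0.11], [0.5, 0.5]]  # toy embeddings
print(mmr(query, candidates, k=2, lambda_mult=0.3))
# [0, 2] -- the near-duplicate candidate 1 is skipped for diversity
```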
09252438e133-0 | langchain_couchbase 0.1.1¶
langchain_couchbase.cache¶
LangChain Couchbase Caches
The functions “_hash”, “_loads_generations” and “_dumps_generations”
are duplicated in this utility from the module
“libs/community/langchain_community/cache.py”.
Classes¶
cache.CouchbaseCache(cluster, bucket_name, ...)
LLM cache that uses Couchbase as the backend.
cache.CouchbaseSemanticCache(cluster, ...[, ...])
Semantic cache backed by a Couchbase server with vector store support.
langchain_couchbase.chat_message_histories¶
Couchbase Chat Message History
Classes¶
chat_message_histories.CouchbaseChatMessageHistory(*, ...)
Chat message history that uses Couchbase as the storage backend.
langchain_couchbase.vectorstores¶
Couchbase vector stores.
Classes¶
vectorstores.CouchbaseVectorStore(cluster, ...)
Couchbase vector store integration.
langchain_nvidia_ai_endpoints 0.2.2¶
langchain_nvidia_ai_endpoints.callbacks¶
Callback Handler that prints to std out.
Classes¶
callbacks.UsageCallbackHandler()
Callback handler that tracks token usage and cost info.
Functions¶
callbacks.get_token_cost_for_model(...[, ...])
Get the cost in USD for a given model and number of tokens.
callbacks.get_usage_callback([price_map, ...])
Get the usage callback handler in a context manager.
callbacks.standardize_model_name(model_name)
Standardize the model name to a format that can be used in the OpenAI API.
langchain_nvidia_ai_endpoints.chat_models¶
Chat Model Components Derived from ChatModel/NVIDIA
Classes¶
chat_models.ChatNVIDIA
NVIDIA chat model.
langchain_nvidia_ai_endpoints.embeddings¶
Embeddings Components Derived from NVEModel/Embeddings
Classes¶
embeddings.NVIDIAEmbeddings
Client to NVIDIA embeddings models.
langchain_nvidia_ai_endpoints.llm¶
Classes¶
llm.NVIDIA
LangChain LLM that uses the Completions API with NVIDIA NIMs.
langchain_nvidia_ai_endpoints.reranking¶
Classes¶
reranking.NVIDIARerank
LangChain Document Compressor that uses the NVIDIA NeMo Retriever Reranking API.
reranking.Ranking
Create a new model by parsing and validating input data from keyword arguments.
langchain_nvidia_ai_endpoints.tools¶
OpenAI chat wrapper.
Classes¶
tools.ServerToolsMixin() | https://api.python.langchain.com/en/latest/nvidia_ai_endpoints_api_reference.html |
langchain_airbyte 0.1.1¶
langchain_groq 0.1.9¶
langchain_groq.chat_models¶
Groq Chat wrapper.
Classes¶
chat_models.ChatGroq
Groq Chat large language models API. | https://api.python.langchain.com/en/latest/groq_api_reference.html |
langchain_cohere 0.2.4¶
langchain_cohere.chains¶
Functions¶
chains.summarize.summarize_chain.create_summarize_prompt([...])
Create a prompt for this agent. The system_message is used as the first (system) message in the prompt; extra_prompt_messages are placed between the system message and the new human input.
chains.summarize.summarize_chain.load_summarize_chain(llm)
langchain_cohere.chat_models¶
Classes¶
chat_models.ChatCohere
Implements the BaseChatModel (and BaseLanguageModel) interface with Cohere's large language models.
Functions¶
chat_models.get_cohere_chat_request(messages, *)
Get the request for the Cohere chat API.
chat_models.get_role(message)
Get the role of the message.
langchain_cohere.common¶
Classes¶
common.CohereCitation(start, end, text, ...)
Cohere has fine-grained citations that specify the exact part of text.
langchain_cohere.csv_agent¶
Functions¶
csv_agent.agent.count_words_in_file(file_path)
csv_agent.agent.create_csv_agent(llm, path)
Create csv agent with the specified language model.
csv_agent.agent.create_prompt([...])
Create prompt for this agent.
csv_agent.tools.get_file_peek_tool()
csv_agent.tools.get_file_read_tool()
csv_agent.tools.get_python_tool()
Returns a tool that will execute python code and return the output.
langchain_cohere.embeddings¶
Classes¶
embeddings.CohereEmbeddings
Implements the Embeddings interface with Cohere's text representation language models.
langchain_cohere.llms¶
Classes¶
llms.BaseCohere
Base class for Cohere models.
llms.Cohere
Cohere large language models.
Functions¶
llms.acompletion_with_retry(llm, **kwargs)
Use tenacity to retry the completion call.
llms.completion_with_retry(llm, **kwargs)
Use tenacity to retry the completion call.
llms.enforce_stop_tokens(text, stop)
Cut off the text as soon as any stop words occur.
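`enforce_stop_tokens` cuts generated text off at the first stop sequence. A minimal sketch of that behavior (illustrative, not necessarily the library's exact implementation):

```python
import re

def enforce_stop_tokens(text, stop):
    """Return the text up to the first occurrence of any stop sequence."""
    pattern = "|".join(re.escape(s) for s in stop)
    return re.split(pattern, text, maxsplit=1)[0]

out = enforce_stop_tokens("Answer: 42\nObservation: tool ran", ["\nObservation:"])
# out == "Answer: 42"
```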
langchain_cohere.rag_retrievers¶
Classes¶
rag_retrievers.CohereRagRetriever
Cohere Chat API with RAG.
langchain_cohere.react_multi_hop¶
Classes¶
react_multi_hop.parsing.CohereToolsReactAgentOutputParser
Parses a message into agent actions/finish.
Functions¶
react_multi_hop.agent.create_cohere_react_agent(...)
Create an agent that enables multiple tools to be used in sequence to complete a task.
react_multi_hop.parsing.parse_actions(generation)
Parse action selections from model output.
react_multi_hop.parsing.parse_answer_with_prefixes(...)
Parses a string into key-value pairs.
react_multi_hop.parsing.parse_citations(...)
Parses a grounded_generation (from parse_actions) and documents (from convert_to_documents) into a (generation, CohereCitation list) tuple.
react_multi_hop.parsing.parse_jsonified_tool_use_generation(...)
Parses model-generated jsonified actions.
react_multi_hop.prompt.convert_to_documents(...)
Converts observations into a 'document' dict
react_multi_hop.prompt.create_directly_answer_tool()
directly_answer is a special tool that's always presented to the model as an available tool.
react_multi_hop.prompt.multi_hop_prompt(...)
The returned function produces a BasePromptTemplate suitable for multi-hop.
react_multi_hop.prompt.render_intermediate_steps(...)
Renders an agent's intermediate steps into prompt content.
react_multi_hop.prompt.render_messages(messages)
Renders one or more BaseMessage implementations into prompt content.
react_multi_hop.prompt.render_observations(...)
Renders the 'output' part of an Agent's intermediate step into prompt content.
react_multi_hop.prompt.render_role(message)
Renders the role of a message into prompt content.
react_multi_hop.prompt.render_structured_preamble([...])
Renders the structured preamble part of the prompt content.
react_multi_hop.prompt.render_tool([tool, ...])
Renders a tool into prompt content, from either a BaseTool instance or a JSON schema.
langchain_cohere.rerank¶
Classes¶
rerank.CohereRerank
Document compressor that uses Cohere Rerank API.
langchain_cohere.sql_agent¶
Functions¶
sql_agent.agent.create_sql_agent(llm[, ...])
Construct a SQL agent from an LLM and toolkit or database. | https://api.python.langchain.com/en/latest/cohere_api_reference.html |
langchain_prompty 0.0.3¶
langchain_prompty.core¶
Classes¶
core.Frontmatter()
Class for reading frontmatter from a string or file.
core.Invoker(prompty)
Base class for all invokers.
core.InvokerFactory()
Factory for creating invokers.
core.ModelSettings
Model settings for a prompty model.
core.NoOpParser(prompty)
NoOp parser for invokers.
core.Prompty
Base Prompty model.
core.PropertySettings
Property settings for a prompty model.
core.SimpleModel
Simple model for a single item.
core.TemplateSettings
Template settings for a prompty model.
Functions¶
core.param_hoisting(top, bottom[, top_key])
Merge two dictionaries with hoisting of parameters from bottom to top.
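A sketch of what `param_hoisting` does, under the assumption that values in `top` take precedence and keys missing from it are "hoisted" up from `bottom` (the `top_key` handling here is a guess at the real signature's behavior, shown only for illustration):

```python
def param_hoisting(top, bottom, top_key=None):
    """Merge two dicts: `top` wins; keys missing from it are filled from `bottom`."""
    # If top_key is given, nest `top` under that key before merging (assumed behavior).
    merged = {top_key: top} if top_key else dict(top)
    for key, value in bottom.items():
        merged.setdefault(key, value)
    return merged

cfg = param_hoisting({"model": "my-model"}, {"model": "default", "temperature": 0.2})
# cfg == {"model": "my-model", "temperature": 0.2}
```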
langchain_prompty.langchain¶
Functions¶
langchain.create_chat_prompt(path[, ...])
Create a chat prompt from a Langchain schema.
langchain_prompty.parsers¶
Classes¶
parsers.PromptyChatParser(prompty)
Parse a chat prompt into a list of messages.
parsers.RoleMap()
langchain_prompty.renderers¶
Classes¶
renderers.MustacheRenderer(prompty)
Render a mustache template.
langchain_prompty.utils¶
Functions¶
utils.execute(prompt[, configuration, ...])
Execute a prompty.
utils.load(prompt_path[, configuration])
Load a prompty file and return a Prompty object.
utils.prepare(prompt[, inputs])
Prepare the inputs for the prompty.
utils.run(prompt, content[, configuration, ...])
Run the prompty. | https://api.python.langchain.com/en/latest/prompty_api_reference.html |
langchain_elasticsearch 0.2.2¶
langchain_elasticsearch.cache¶
Classes¶
cache.ElasticsearchCache(index_name[, ...])
An Elasticsearch cache integration for LLMs.
cache.ElasticsearchEmbeddingsCache(index_name)
An Elasticsearch store for caching embeddings.
langchain_elasticsearch.chat_history¶
Classes¶
chat_history.ElasticsearchChatMessageHistory(...)
Chat message history that stores history in Elasticsearch.
langchain_elasticsearch.client¶
Functions¶
client.create_elasticsearch_client([url, ...])
langchain_elasticsearch.embeddings¶
Classes¶
embeddings.ElasticsearchEmbeddings(client, ...)
Elasticsearch embedding models.
embeddings.EmbeddingServiceAdapter(...)
Adapter for LangChain Embeddings to support the EmbeddingService interface from elasticsearch.helpers.vectorstore.
langchain_elasticsearch.retrievers¶
Classes¶
retrievers.ElasticsearchRetriever
Elasticsearch retriever.
langchain_elasticsearch.vectorstores¶
Classes¶
vectorstores.ElasticsearchStore(index_name, ...)
Elasticsearch vector store.
Deprecated classes¶
vectorstores.ApproxRetrievalStrategy([...])
Deprecated since version 0.2.0: Use DenseVectorStrategy instead.
vectorstores.BM25RetrievalStrategy([k1, b])
Deprecated since version 0.2.0: Use BM25Strategy instead.
vectorstores.BaseRetrievalStrategy(*args, ...)
Deprecated since version 0.2.0: Use RetrievalStrategy instead.
vectorstores.ExactRetrievalStrategy(*args, ...)
Deprecated since version 0.2.0: Use DenseVectorScriptScoreStrategy instead.
vectorstores.SparseRetrievalStrategy([model_id])
Deprecated since version 0.2.0: Use SparseVectorStrategy instead. | https://api.python.langchain.com/en/latest/elasticsearch_api_reference.html |
langchain_fireworks 0.1.7¶
langchain_fireworks.chat_models¶
Fireworks chat wrapper.
Classes¶
chat_models.ChatFireworks
Fireworks Chat large language models API.
langchain_fireworks.embeddings¶
Classes¶
embeddings.FireworksEmbeddings
Fireworks embedding model integration.
langchain_fireworks.llms¶
Wrapper around Fireworks AI’s Completion API.
Classes¶
llms.Fireworks
LLM models from Fireworks. | https://api.python.langchain.com/en/latest/fireworks_api_reference.html |
langchain_google_community 1.0.8¶
langchain_google_community.bigquery¶
Classes¶
bigquery.BigQueryLoader(query[, project, ...])
Load from the Google Cloud Platform BigQuery.
langchain_google_community.bq_storage_vectorstores¶
Classes¶
bq_storage_vectorstores.bigquery.BigQueryVectorStore
A vector store implementation that utilizes BigQuery and BigQuery Vector Search.
bq_storage_vectorstores.featurestore.VertexFSVectorStore
A vector store implementation that utilizes BigQuery Storage and Vertex AI Feature Store.
Functions¶
bq_storage_vectorstores.utils.cast_proto_type(...)
bq_storage_vectorstores.utils.check_bq_dataset_exists(...)
bq_storage_vectorstores.utils.doc_match_filter(...)
bq_storage_vectorstores.utils.validate_column_in_bq_schema(...)
Validates a column within a BigQuery schema.
langchain_google_community.docai¶
Module contains a PDF parser based on Document AI from Google Cloud.
You need to install two libraries to use this parser:
pip install google-cloud-documentai
pip install google-cloud-documentai-toolbox
Classes¶
docai.DocAIParser(*[, client, project_id, ...])
Google Cloud Document AI parser.
docai.DocAIParsingResults(source_path, ...)
A dataclass to store Document AI parsing results.
langchain_google_community.documentai_warehouse¶
Retriever wrapper for Google Cloud Document AI Warehouse.
Classes¶
documentai_warehouse.DocumentAIWarehouseRetriever
A retriever based on Document AI Warehouse.
langchain_google_community.drive¶
Classes¶
drive.GoogleDriveLoader
Load Google Docs from Google Drive.
langchain_google_community.gcs_directory¶
Classes¶
gcs_directory.GCSDirectoryLoader(...[, ...])
Load from GCS directory.
langchain_google_community.gcs_file¶
Classes¶ | https://api.python.langchain.com/en/latest/google_community_api_reference.html |
gcs_file.GCSFileLoader(project_name, bucket, ...)
Load from GCS file.
langchain_google_community.gmail¶
Classes¶
gmail.base.GmailBaseTool
Base class for Gmail tools.
gmail.create_draft.CreateDraftSchema
Input for CreateDraftTool.
gmail.create_draft.GmailCreateDraft
Tool that creates a draft email for Gmail.
gmail.get_message.GmailGetMessage
Tool that gets a message by ID from Gmail.
gmail.get_message.SearchArgsSchema
Input for GetMessageTool.
gmail.get_thread.GetThreadSchema
Input for GetThreadTool.
gmail.get_thread.GmailGetThread
Tool that gets a thread by ID from Gmail.
gmail.loader.GMailLoader(creds[, n, raise_error])
Load data from GMail.
gmail.search.GmailSearch
Tool that searches for messages or threads in Gmail.
gmail.search.Resource(value)
Enumerator of Resources to search.
gmail.search.SearchArgsSchema
Input for SearchGmailTool.
gmail.send_message.GmailSendMessage
Tool that sends a message to Gmail.
gmail.send_message.SendMessageSchema
Input for SendMessageTool.
gmail.toolkit.GmailToolkit
Toolkit for interacting with Gmail.
Functions¶
gmail.utils.build_resource_service([...])
Build a Gmail service.
gmail.utils.clean_email_body(body)
Clean email body.
gmail.utils.get_gmail_credentials([...])
Get credentials.
gmail.utils.import_google()
Import google libraries.
gmail.utils.import_googleapiclient_resource_builder()
Import googleapiclient.discovery.build function.
gmail.utils.import_installed_app_flow()
Import InstalledAppFlow class.
langchain_google_community.google_speech_to_text¶
Classes¶
google_speech_to_text.SpeechToTextLoader(...)
Loader for Google Cloud Speech-to-Text audio transcripts. | https://api.python.langchain.com/en/latest/google_community_api_reference.html |
langchain_google_community.places_api¶
Chain that calls Google Places API.
Classes¶
places_api.GooglePlacesAPIWrapper
Wrapper around Google Places API.
places_api.GooglePlacesSchema
Input for GooglePlacesTool.
places_api.GooglePlacesTool
Tool that queries the Google places API.
langchain_google_community.search¶
Util that calls Google Search.
Classes¶
search.GoogleSearchAPIWrapper
Wrapper for Google Search API.
search.GoogleSearchResults
Tool that queries the Google Search API and gets back json.
search.GoogleSearchRun
Tool that queries the Google search API.
langchain_google_community.texttospeech¶
Classes¶
texttospeech.TextToSpeechTool
Tool that queries the Google Cloud Text to Speech API.
langchain_google_community.translate¶
Classes¶
translate.GoogleTranslateTransformer(...[, ...])
Translate text documents using Google Cloud Translation.
langchain_google_community.vertex_ai_search¶
Retriever wrapper for Google Vertex AI Search.
Set the following environment variables before the tests:
export PROJECT_ID=… - set to your Google Cloud project ID
export DATA_STORE_ID=… - the ID of the search engine to use for the test
Classes¶
vertex_ai_search.VertexAIMultiTurnSearchRetriever
Google Vertex AI Search retriever for multi-turn conversations.
vertex_ai_search.VertexAISearchRetriever
Google Vertex AI Search retriever.
vertex_ai_search.VertexAISearchSummaryTool
Class that exposes a tool to interface with an App in Vertex Search and Conversation and get the summary of the documents retrieved.
langchain_google_community.vertex_check_grounding¶
Classes¶
vertex_check_grounding.VertexAICheckGroundingWrapper
Initializes the Vertex AI CheckGroundingOutputParser with configurable parameters.
langchain_google_community.vertex_rank¶
Classes¶
vertex_rank.VertexAIRank | https://api.python.langchain.com/en/latest/google_community_api_reference.html |
Initializes the Vertex AI Ranker with configurable parameters.
langchain_google_community.vision¶
Classes¶
vision.CloudVisionLoader(file_path[, project])
vision.CloudVisionParser([project]) | https://api.python.langchain.com/en/latest/google_community_api_reference.html |
langchain_mistralai 0.1.13¶
langchain_mistralai.chat_models¶
Classes¶
chat_models.ChatMistralAI
A chat model that uses the MistralAI API.
Functions¶
chat_models.acompletion_with_retry(llm[, ...])
Use tenacity to retry the async completion call.
langchain_mistralai.embeddings¶
Classes¶
embeddings.DummyTokenizer()
Dummy tokenizer for when tokenizer cannot be accessed (e.g., via Huggingface)
embeddings.MistralAIEmbeddings
MistralAI embedding model integration. | https://api.python.langchain.com/en/latest/mistralai_api_reference.html |
langchain_text_splitters 0.2.4¶
langchain_text_splitters.base¶
Classes¶
base.Language(value)
Enum of the programming languages.
base.TextSplitter(chunk_size, chunk_overlap, ...)
Interface for splitting text into chunks.
base.TokenTextSplitter([encoding_name, ...])
Splitting text to tokens using model tokenizer.
base.Tokenizer(chunk_overlap, ...)
Tokenizer data class.
Functions¶
base.split_text_on_tokens(*, text, tokenizer)
Split incoming text and return chunks using tokenizer.
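`split_text_on_tokens` chunks text by token windows. The core windowing logic can be sketched as follows (illustrative; the real function also encodes and decodes with the supplied tokenizer):

```python
def split_tokens(tokens, chunk_size, chunk_overlap):
    """Produce windows of `chunk_size` tokens, each overlapping the last by `chunk_overlap`."""
    chunks = []
    step = chunk_size - chunk_overlap
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start : start + chunk_size])
        if start + chunk_size >= len(tokens):
            break
    return chunks

chunks = split_tokens(list(range(10)), chunk_size=4, chunk_overlap=2)
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```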
langchain_text_splitters.character¶
Classes¶
character.CharacterTextSplitter([separator, ...])
Splitting text by looking at characters.
character.RecursiveCharacterTextSplitter([...])
Splitting text by recursively looking at characters.
langchain_text_splitters.html¶
Classes¶
html.ElementType
Element type as typed dict.
html.HTMLHeaderTextSplitter(headers_to_split_on)
Splitting HTML files based on specified headers.
html.HTMLSectionSplitter(headers_to_split_on)
Splitting HTML files based on specified tag and font sizes.
langchain_text_splitters.json¶
Classes¶
json.RecursiveJsonSplitter([max_chunk_size, ...])
langchain_text_splitters.konlpy¶
Classes¶
konlpy.KonlpyTextSplitter([separator])
Splitting text using Konlpy package.
langchain_text_splitters.latex¶
Classes¶
latex.LatexTextSplitter(**kwargs)
Attempts to split the text along Latex-formatted layout elements.
langchain_text_splitters.markdown¶
Classes¶
markdown.ExperimentalMarkdownSyntaxTextSplitter([...])
An experimental text splitter for handling Markdown syntax.
markdown.HeaderType
Header type as typed dict.
markdown.LineType
Line type as typed dict.
markdown.MarkdownHeaderTextSplitter(...[, ...])
Splitting markdown files based on specified headers.
markdown.MarkdownTextSplitter(**kwargs)
Attempts to split the text along Markdown-formatted headings.
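The idea behind `MarkdownHeaderTextSplitter` — attach the most recently seen matching headers as metadata on each chunk — can be sketched like this (simplified: unlike the real splitter, it does not reset deeper header levels when a shallower header reappears):

```python
def split_on_headers(text, headers_to_split_on):
    """Group lines under the most recently seen headers; headers become metadata."""
    sections, lines, metadata = [], [], {}

    def flush():
        if lines:
            sections.append({"metadata": dict(metadata), "content": "\n".join(lines)})
            lines.clear()

    for line in text.splitlines():
        for prefix, name in headers_to_split_on:
            if line.startswith(prefix + " "):
                flush()                              # close the previous section
                metadata[name] = line[len(prefix) + 1 :]
                break
        else:                                        # no header prefix matched
            if line.strip():
                lines.append(line)
    flush()
    return sections

doc = "# Intro\nhello\n## Details\nworld"
parts = split_on_headers(doc, [("#", "h1"), ("##", "h2")])
```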
langchain_text_splitters.nltk¶
Classes¶
nltk.NLTKTextSplitter([separator, language])
Splitting text using NLTK package.
langchain_text_splitters.python¶
Classes¶
python.PythonCodeTextSplitter(**kwargs)
Attempts to split the text along Python syntax.
langchain_text_splitters.sentence_transformers¶
Classes¶
sentence_transformers.SentenceTransformersTokenTextSplitter([...])
Splitting text to tokens using sentence model tokenizer.
langchain_text_splitters.spacy¶
Classes¶
spacy.SpacyTextSplitter([separator, ...])
Splitting text using Spacy package. | https://api.python.langchain.com/en/latest/text_splitters_api_reference.html |
langchain_openai 0.1.23¶
langchain_openai.chat_models¶
Classes¶
chat_models.azure.AzureChatOpenAI
Azure OpenAI chat model integration.
chat_models.base.BaseChatOpenAI
Base class for OpenAI chat model integrations.
chat_models.base.ChatOpenAI
OpenAI chat model integration.
chat_models.base.OpenAIRefusalError
Error raised when the OpenAI Structured Outputs API returns a refusal.
langchain_openai.embeddings¶
Classes¶
embeddings.azure.AzureOpenAIEmbeddings
AzureOpenAI embedding model integration.
embeddings.base.OpenAIEmbeddings
OpenAI embedding model integration.
langchain_openai.llms¶
Classes¶
llms.azure.AzureOpenAI
Azure-specific OpenAI large language models.
llms.base.BaseOpenAI
Base OpenAI large language model class.
llms.base.OpenAI
OpenAI completion model integration. | https://api.python.langchain.com/en/latest/openai_api_reference.html |
langchain_milvus 0.1.5¶
langchain_milvus.retrievers¶
Classes¶
retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever
Hybrid search retriever that uses Milvus Collection to retrieve documents based on multiple fields.
retrievers.zilliz_cloud_pipeline_retriever.ZillizCloudPipelineRetriever
Zilliz Cloud Pipeline retriever.
langchain_milvus.utils¶
Classes¶
utils.sparse.BM25SparseEmbedding(corpus[, ...])
Sparse embedding model based on BM25.
utils.sparse.BaseSparseEmbedding()
Interface for Sparse embedding models.
langchain_milvus.vectorstores¶
Classes¶
vectorstores.milvus.Milvus(embedding_function)
Milvus vector store integration.
vectorstores.zilliz.Zilliz(embedding_function)
Zilliz vector store.
Functions¶
vectorstores.milvus.cosine_similarity(X, Y)
Row-wise cosine similarity between two equal-width matrices.
vectorstores.milvus.maximal_marginal_relevance(...)
Calculate maximal marginal relevance. | https://api.python.langchain.com/en/latest/milvus_api_reference.html |
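`maximal_marginal_relevance` trades off relevance to the query against redundancy with results already selected. A greedy sketch over precomputed similarity scores (the library operates on embedding matrices instead):

```python
def mmr(query_sims, doc_sims, k, lambda_mult=0.5):
    """Greedily pick k docs maximizing relevance minus redundancy with prior picks."""
    selected, candidates = [], list(range(len(query_sims)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((doc_sims[i][j] for j in selected), default=0.0)
            return lambda_mult * query_sims[i] - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Docs 0 and 1 are near-duplicates; MMR prefers the diverse doc 2 as the second pick.
picks = mmr(
    query_sims=[0.9, 0.85, 0.3],
    doc_sims=[[1.0, 0.95, 0.1], [0.95, 1.0, 0.1], [0.1, 0.1, 1.0]],
    k=2,
)
# picks == [0, 2]
```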
langchain_core 0.2.38¶
langchain_core.agents¶
Schema definitions for representing agent actions, observations, and return values.
ATTENTION: The schema definitions are provided for backwards compatibility.
New agents should be built using the langgraph library
(https://github.com/langchain-ai/langgraph), which provides a simpler
and more flexible way to define agents.
Please see the migration guide for information on how to migrate existing
agents to modern langgraph agents:
https://python.langchain.com/v0.2/docs/how_to/migrate_agent/
Agents use language models to choose a sequence of actions to take.
A basic agent works in the following manner:
Given a prompt an agent uses an LLM to request an action to take (e.g., a tool to run).
The agent executes the action (e.g., runs the tool), and receives an observation.
The agent returns the observation to the LLM, which can then be used to generate the next action.
When the agent reaches a stopping condition, it returns a final return value.
The schemas for the agents themselves are defined in langchain.agents.agent.
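The loop described above can be sketched with a stub standing in for the language model (all names here are illustrative, not the library's API):

```python
def run_agent(llm, tools, prompt, max_steps=5):
    """Ask the model for an action, run it, feed the observation back, repeat."""
    transcript = prompt
    for _ in range(max_steps):
        decision = llm(transcript)          # model chooses an action or finishes
        if decision["action"] == "finish":
            return decision["output"]       # final return value
        observation = tools[decision["action"]](decision["input"])
        transcript += f"\nObservation: {observation}"
    raise RuntimeError("agent hit max_steps without finishing")

def stub_llm(transcript):
    # Pretend model: call the calculator once, then finish with its result.
    if "Observation:" in transcript:
        return {"action": "finish", "output": transcript.rsplit(": ", 1)[-1]}
    return {"action": "calculator", "input": "2+2"}

result = run_agent(stub_llm, {"calculator": lambda expr: str(eval(expr))}, "What is 2+2?")
# result == "4"
```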
Classes¶
agents.AgentAction
Represents a request to execute an action by an agent.
agents.AgentActionMessageLog
Representation of an action to be executed by an agent.
agents.AgentFinish
Final return value of an ActionAgent.
agents.AgentStep
Result of running an AgentAction.
langchain_core.beta¶
Some beta features that are not yet ready for production.
Classes¶
beta.runnables.context.Context()
Context for a runnable.
beta.runnables.context.ContextGet
beta.runnables.context.ContextSet
beta.runnables.context.PrefixContext([prefix])
Context for a runnable with a prefix.
Functions¶
beta.runnables.context.aconfig_with_context(...)
Asynchronously patch a runnable config with context getters and setters.
beta.runnables.context.config_with_context(...)
Patch a runnable config with context getters and setters.
langchain_core.caches¶
Warning
Beta Feature!
Cache provides an optional caching layer for LLMs.
Cache is useful for two reasons:
It can save you money by reducing the number of API calls you make to the LLM
provider if you’re often requesting the same completion multiple times.
It can speed up your application by reducing the number of API calls you make
to the LLM provider.
Cache directly competes with Memory. See documentation for Pros and Cons.
Class hierarchy:
BaseCache --> <name>Cache # Examples: InMemoryCache, RedisCache, GPTCache
Classes¶
caches.BaseCache()
Interface for a caching layer for LLMs and Chat models.
caches.InMemoryCache(*[, maxsize])
Cache that stores things in memory.
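The lookup-before-call pattern the cache enables can be sketched as follows (the model-settings string and the stubbed completion call are made up for illustration):

```python
class SimpleLLMCache:
    """Cache completions keyed on (prompt, model settings)."""
    def __init__(self):
        self._store = {}
    def lookup(self, prompt, llm_string):
        return self._store.get((prompt, llm_string))
    def update(self, prompt, llm_string, return_val):
        self._store[(prompt, llm_string)] = return_val

calls = 0
def expensive_completion(prompt):        # stands in for a paid API call
    global calls
    calls += 1
    return prompt.upper()

cache = SimpleLLMCache()
def complete(prompt, llm_string="fake-model@temperature=0"):
    cached = cache.lookup(prompt, llm_string)
    if cached is None:
        cached = expensive_completion(prompt)
        cache.update(prompt, llm_string, cached)
    return cached

first = complete("hello")
second = complete("hello")   # identical request: served from the cache, no second call
```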
langchain_core.callbacks¶
Callback handlers allow listening to events in LangChain.
Class hierarchy:
BaseCallbackHandler --> <name>CallbackHandler # Example: AimCallbackHandler
Classes¶
callbacks.base.AsyncCallbackHandler()
Async callback handler for LangChain.
callbacks.base.BaseCallbackHandler()
Base callback handler for LangChain.
callbacks.base.BaseCallbackManager(handlers)
Base callback manager for LangChain.
callbacks.base.CallbackManagerMixin()
Mixin for callback manager.
callbacks.base.ChainManagerMixin()
Mixin for chain callbacks.
callbacks.base.LLMManagerMixin()
Mixin for LLM callbacks.
callbacks.base.RetrieverManagerMixin()
Mixin for Retriever callbacks.
callbacks.base.RunManagerMixin()
Mixin for run manager.
callbacks.base.ToolManagerMixin()
Mixin for tool callbacks.
callbacks.file.FileCallbackHandler(filename)
Callback Handler that writes to a file.
callbacks.manager.AsyncCallbackManager(handlers)
Async callback manager that handles callbacks from LangChain.
callbacks.manager.AsyncCallbackManagerForChainGroup(...)
Async callback manager for the chain group.
callbacks.manager.AsyncCallbackManagerForChainRun(*, ...)
Async callback manager for chain run.
callbacks.manager.AsyncCallbackManagerForLLMRun(*, ...)
Async callback manager for LLM run.
callbacks.manager.AsyncCallbackManagerForRetrieverRun(*, ...)
Async callback manager for retriever run.
callbacks.manager.AsyncCallbackManagerForToolRun(*, ...)
Async callback manager for tool run.
callbacks.manager.AsyncParentRunManager(*, ...)
Async Parent Run Manager.
callbacks.manager.AsyncRunManager(*, run_id, ...)
Async Run Manager.
callbacks.manager.BaseRunManager(*, run_id, ...)
Base class for run manager (a bound callback manager).
callbacks.manager.CallbackManager(handlers)
Callback manager for LangChain.
callbacks.manager.CallbackManagerForChainGroup(...)
Callback manager for the chain group.
callbacks.manager.CallbackManagerForChainRun(*, ...)
Callback manager for chain run.
callbacks.manager.CallbackManagerForLLMRun(*, ...)
Callback manager for LLM run.
callbacks.manager.CallbackManagerForRetrieverRun(*, ...)
Callback manager for retriever run.
callbacks.manager.CallbackManagerForToolRun(*, ...)
Callback manager for tool run.
callbacks.manager.ParentRunManager(*, ...[, ...])
Sync Parent Run Manager.
callbacks.manager.RunManager(*, run_id, ...)
Sync Run Manager.
callbacks.stdout.StdOutCallbackHandler([color])
Callback Handler that prints to std out.
callbacks.streaming_stdout.StreamingStdOutCallbackHandler()
Callback handler for streaming. | https://api.python.langchain.com/en/latest/core_api_reference.html |
Functions¶
callbacks.manager.adispatch_custom_event(...)
Dispatch an adhoc event to the handlers.
callbacks.manager.ahandle_event(handlers, ...)
Async generic event handler for AsyncCallbackManager.
callbacks.manager.atrace_as_chain_group(...)
Get an async callback manager for a chain group in a context manager.
callbacks.manager.dispatch_custom_event(...)
Dispatch an adhoc event.
callbacks.manager.handle_event(handlers, ...)
Generic event handler for CallbackManager.
callbacks.manager.shielded(func)
Ensures an awaitable method is always shielded from cancellation.
callbacks.manager.trace_as_chain_group(...)
Get a callback manager for a chain group in a context manager.
langchain_core.chat_history¶
Chat message history stores a history of the message interactions in a chat.
Class hierarchy:
BaseChatMessageHistory --> <name>ChatMessageHistory # Examples: FileChatMessageHistory, PostgresChatMessageHistory
Main helpers:
AIMessage, HumanMessage, BaseMessage
Classes¶
chat_history.BaseChatMessageHistory()
Abstract base class for storing chat message history.
chat_history.InMemoryChatMessageHistory
In memory implementation of chat message history.
langchain_core.chat_loaders¶
Classes¶
chat_loaders.BaseChatLoader()
Base class for chat loaders.
langchain_core.chat_sessions¶
Chat Sessions are a collection of messages and function calls.
Classes¶
chat_sessions.ChatSession
Chat Session represents a single conversation, channel, or other group of messages.
langchain_core.document_loaders¶
Classes¶
document_loaders.base.BaseBlobParser()
Abstract interface for blob parsers.
document_loaders.base.BaseLoader()
Interface for Document Loader.
document_loaders.blob_loaders.BlobLoader()
Abstract interface for blob loader implementations.
document_loaders.langsmith.LangSmithLoader(*)
Load LangSmith Dataset examples as Documents.
langchain_core.documents¶
Document module is a collection of classes that handle documents
and their transformations.
Classes¶
documents.base.BaseMedia
Use to represent media content.
documents.base.Blob
Blob represents raw data by either reference or value.
documents.base.Document
Class for storing a piece of text and associated metadata.
documents.compressor.BaseDocumentCompressor
Base class for document compressors.
documents.transformers.BaseDocumentTransformer()
Abstract base class for document transformation.
langchain_core.embeddings¶
Classes¶
embeddings.embeddings.Embeddings()
Interface for embedding models.
embeddings.fake.DeterministicFakeEmbedding
Deterministic fake embedding model for unit testing purposes.
embeddings.fake.FakeEmbeddings
Fake embedding model for unit testing purposes.
langchain_core.example_selectors¶
Example selectors implement logic for selecting examples to include
in prompts.
This allows us to select examples that are most relevant to the input.
Classes¶
example_selectors.base.BaseExampleSelector()
Interface for selecting examples to include in prompts.
example_selectors.length_based.LengthBasedExampleSelector
Select examples based on length.
example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector
Select examples based on Max Marginal Relevance.
example_selectors.semantic_similarity.SemanticSimilarityExampleSelector
Select examples based on semantic similarity.
Functions¶
example_selectors.semantic_similarity.sorted_values(values)
Return a list of values in dict sorted by key.
langchain_core.exceptions¶
Custom exceptions for LangChain.
Classes¶
exceptions.LangChainException
General LangChain exception.
exceptions.OutputParserException(error[, ...])
Exception that output parsers should raise to signify a parsing error. | https://api.python.langchain.com/en/latest/core_api_reference.html |
exceptions.TracerException
Base class for exceptions in tracers module.
langchain_core.globals¶
Global values and configuration that apply to all of LangChain.
Functions¶
globals.get_debug()
Get the value of the debug global setting.
globals.get_llm_cache()
Get the value of the llm_cache global setting.
globals.get_verbose()
Get the value of the verbose global setting.
globals.set_debug(value)
Set a new value for the debug global setting.
globals.set_llm_cache(value)
Set a new LLM cache, overwriting the previous value, if any.
globals.set_verbose(value)
Set a new value for the verbose global setting.
langchain_core.graph_vectorstores¶
Classes¶
graph_vectorstores.base.GraphVectorStore(...)
graph_vectorstores.base.GraphVectorStoreRetriever
Retriever class for GraphVectorStore.
graph_vectorstores.base.Node
graph_vectorstores.links.Link(kind, ...)
Functions¶
graph_vectorstores.base.nodes_to_documents(nodes)
graph_vectorstores.links.add_links(doc, *links)
graph_vectorstores.links.copy_with_links(...)
graph_vectorstores.links.get_links(doc)
langchain_core.indexing¶
Code to help indexing data into a vectorstore.
This package contains helper logic to help deal with indexing data into
a vectorstore while avoiding duplicated content and over-writing content
if it’s unchanged.
Classes¶
indexing.api.IndexingResult
Return a detailed breakdown of the result of the indexing operation.
indexing.base.DeleteResponse
A generic response for delete operation.
indexing.base.DocumentIndex
indexing.base.InMemoryRecordManager(namespace)
An in-memory record manager for testing purposes.
indexing.base.RecordManager(namespace)
Abstract base class representing the interface for a record manager. | https://api.python.langchain.com/en/latest/core_api_reference.html |
indexing.base.UpsertResponse
A generic response for upsert operations.
indexing.in_memory.InMemoryDocumentIndex
Functions¶
indexing.api.aindex(docs_source, ...[, ...])
Async index data from the loader into the vector store.
indexing.api.index(docs_source, ...[, ...])
Index data from the loader into the vector store.
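The core idea behind `indexing.api.index` — avoid duplicated content and skip unchanged documents — can be sketched with content hashes. This is a hypothetical helper, not the real API, which takes a loader, a `RecordManager`, and a vector store.

```python
# Sketch of dedup-aware indexing: hash each document's content so that
# already-seen, unchanged documents are skipped instead of re-written.
import hashlib

def index_docs(docs, seen_hashes, store):
    """Add only new/changed docs to `store`; return a result breakdown."""
    added = skipped = 0
    for doc in docs:
        key = hashlib.sha256(doc.encode()).hexdigest()
        if key in seen_hashes:      # unchanged content -> skip
            skipped += 1
        else:
            seen_hashes.add(key)
            store.append(doc)
            added += 1
    return {"num_added": added, "num_skipped": skipped}
```

The real record manager also tracks timestamps and namespaces so stale entries can be cleaned up.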
langchain_core.language_models¶
Language Model is a type of model that can generate text or complete
text prompts.
LangChain has two main classes to work with language models: Chat Models
and “old-fashioned” LLMs.
## Chat Models
Language models that use a sequence of messages as inputs and return chat messages
as outputs (as opposed to using plain text). These are traditionally newer models (
older models are generally LLMs, see below). Chat models support the assignment of
distinct roles to conversation messages, helping to distinguish messages from the AI,
users, and instructions such as system messages.
The key abstraction for chat models is BaseChatModel. Implementations
should inherit from this class. See the following guide for more information
on how to implement a custom chat model:
https://python.langchain.com/v0.2/docs/how_to/custom_chat_model/
## LLMs
Language models that take a string as input and return a string.
These are traditionally older models (newer models are generally Chat Models, see above).
Although the underlying models are string in, string out, the LangChain wrappers
also allow these models to take messages as input. This gives them the same interface | https://api.python.langchain.com/en/latest/core_api_reference.html |
as Chat Models. When messages are passed in as input, they will be formatted into a
string under the hood before being passed to the underlying model.
To implement a custom LLM, inherit from BaseLLM or LLM.
Please see the following guide for more information on how to implement a custom LLM:
https://python.langchain.com/v0.2/docs/how_to/custom_llm/
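The behavior described above — string-in/string-out models that nevertheless accept messages, which are formatted into a string under the hood — can be sketched without the library. This toy class is illustrative only; a real custom LLM inherits from `language_models.llms.LLM` and implements `_call`.

```python
# Sketch of an LLM wrapper: the underlying model is string in, string out,
# but message input is flattened to a string first, giving it the same
# invoke() interface shape as a chat model.
def format_messages_as_string(messages):
    """Flatten (role, content) pairs into a single prompt string."""
    return "\n".join(f"{role}: {content}" for role, content in messages)

class EchoLLM:
    """Toy 'LLM' accepting either a string or a list of (role, content)."""
    def invoke(self, prompt):
        if not isinstance(prompt, str):
            prompt = format_messages_as_string(prompt)
        return self._call(prompt)

    def _call(self, prompt: str) -> str:
        # A real subclass would call a model here; we just echo uppercased.
        return prompt.upper()
```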
Classes¶
language_models.base.BaseLanguageModel
Abstract base class for interfacing with language models.
language_models.base.LangSmithParams
LangSmith parameters for tracing.
language_models.chat_models.BaseChatModel
Base class for chat models.
language_models.chat_models.SimpleChatModel
Simplified implementation for a chat model to inherit from.
language_models.fake.FakeListLLM
Fake LLM for testing purposes.
language_models.fake.FakeListLLMError
Fake error for testing purposes.
language_models.fake.FakeStreamingListLLM
Fake streaming list LLM for testing purposes.
language_models.fake_chat_models.FakeChatModel
Fake Chat Model wrapper for testing purposes.
language_models.fake_chat_models.FakeListChatModel
Fake ChatModel for testing purposes.
language_models.fake_chat_models.FakeListChatModelError
language_models.fake_chat_models.FakeMessagesListChatModel
Fake ChatModel for testing purposes.
language_models.fake_chat_models.GenericFakeChatModel
Generic fake chat model that can be used to test the chat model interface.
language_models.fake_chat_models.ParrotFakeChatModel
Generic fake chat model that can be used to test the chat model interface.
language_models.llms.BaseLLM
Base LLM abstract interface.
language_models.llms.LLM
Simple interface for implementing a custom LLM.
Functions¶
language_models.chat_models.agenerate_from_stream(stream) | https://api.python.langchain.com/en/latest/core_api_reference.html |
Async generate from a stream.
language_models.chat_models.generate_from_stream(stream)
Generate from a stream.
language_models.llms.aget_prompts(params, ...)
Get prompts that are already cached.
language_models.llms.aupdate_cache(cache, ...)
Update the cache and get the LLM output.
language_models.llms.create_base_retry_decorator(...)
Create a retry decorator for a given LLM and the provided error types.
language_models.llms.get_prompts(params, prompts)
Get prompts that are already cached.
language_models.llms.update_cache(cache, ...)
Update the cache and get the LLM output.
langchain_core.load¶
The load module helps with serialization and deserialization.
Classes¶
load.load.Reviver([secrets_map, ...])
Reviver for JSON objects.
load.serializable.BaseSerialized
Base class for serialized objects.
load.serializable.Serializable
Serializable base class.
load.serializable.SerializedConstructor
Serialized constructor.
load.serializable.SerializedNotImplemented
Serialized not implemented.
load.serializable.SerializedSecret
Serialized secret.
Functions¶
load.dump.default(obj)
Return a default value for a Serializable object or a SerializedNotImplemented object.
load.dump.dumpd(obj)
Return a dict representation of an object.
load.dump.dumps(obj, *[, pretty])
Return a json string representation of an object.
load.load.load(obj, *[, secrets_map, ...])
load.load.loads(text, *[, secrets_map, ...])
load.serializable.to_json_not_implemented(obj)
Serialize a "not implemented" object.
load.serializable.try_neq_default(value, ...)
Try to determine if a value is different from the default.
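The dumps/loads round trip this module provides can be sketched with stdlib JSON: secrets are replaced with placeholders on serialization and revived from a secrets map on deserialization (the job of `load.load.Reviver`). The placeholder scheme and function names here are assumptions for illustration.

```python
# Sketch of a serialize/deserialize round trip that keeps secrets out of
# the serialized payload and restores them from a secrets map on load.
import json

def dumps_obj(obj: dict) -> str:
    """Serialize, replacing any 'api_key' value with a placeholder."""
    safe = {k: ("__SECRET__" if k == "api_key" else v) for k, v in obj.items()}
    return json.dumps(safe)

def loads_obj(text: str, secrets_map: dict) -> dict:
    """Deserialize, reviving placeholders from `secrets_map`."""
    obj = json.loads(text)
    return {k: (secrets_map.get(k, v) if v == "__SECRET__" else v)
            for k, v in obj.items()}
```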
langchain_core.memory¶ | https://api.python.langchain.com/en/latest/core_api_reference.html |
Memory maintains Chain state, incorporating context from past runs.
Class hierarchy for Memory:
BaseMemory --> <name>Memory --> <name>Memory # Examples: BaseChatMemory -> MotorheadMemory
Classes¶
memory.BaseMemory
Abstract base class for memory in Chains.
langchain_core.messages¶
Messages are objects used in prompts and chat conversations.
Class hierarchy:
BaseMessage --> SystemMessage, AIMessage, HumanMessage, ChatMessage, FunctionMessage, ToolMessage
--> BaseMessageChunk --> SystemMessageChunk, AIMessageChunk, HumanMessageChunk, ChatMessageChunk, FunctionMessageChunk, ToolMessageChunk
Main helpers:
ChatPromptTemplate
Classes¶
messages.ai.AIMessage
Message from an AI.
messages.ai.AIMessageChunk
Message chunk from an AI.
messages.ai.UsageMetadata
Usage metadata for a message, such as token counts.
messages.base.BaseMessage
Base abstract message class.
messages.base.BaseMessageChunk
Message chunk, which can be concatenated with other Message chunks.
messages.chat.ChatMessage
Message that can be assigned an arbitrary speaker (i.e. role).
messages.chat.ChatMessageChunk
Chat Message chunk.
messages.function.FunctionMessage
Message for passing the result of executing a tool back to a model.
messages.function.FunctionMessageChunk
Function Message chunk.
messages.human.HumanMessage
Message from a human.
messages.human.HumanMessageChunk
Human Message chunk.
messages.modifier.RemoveMessage
messages.system.SystemMessage
Message for priming AI behavior.
messages.system.SystemMessageChunk
System Message chunk.
messages.tool.InvalidToolCall
Allowance for errors made by LLM.
messages.tool.ToolCall
Represents a request to call a tool.
messages.tool.ToolCallChunk | https://api.python.langchain.com/en/latest/core_api_reference.html |
A chunk of a tool call (e.g., as part of a stream).
messages.tool.ToolMessage
Message for passing the result of executing a tool back to a model.
messages.tool.ToolMessageChunk
Tool Message chunk.
Functions¶
messages.ai.add_ai_message_chunks(left, *others)
Add multiple AIMessageChunks together.
messages.base.get_msg_title_repr(title, *[, ...])
Get a title representation for a message.
messages.base.merge_content(first_content, ...)
Merge two message contents.
messages.base.message_to_dict(message)
Convert a Message to a dictionary.
messages.base.messages_to_dict(messages)
Convert a sequence of Messages to a list of dictionaries.
messages.tool.default_tool_chunk_parser(...)
Best-effort parsing of tool chunks.
messages.tool.default_tool_parser(raw_tool_calls)
Best-effort parsing of tools.
messages.tool.invalid_tool_call(*[, name, ...])
messages.tool.tool_call(*, name, args, id)
messages.tool.tool_call_chunk(*[, name, ...])
messages.utils.convert_to_messages(messages)
Convert a sequence of messages to a list of messages.
messages.utils.filter_messages([messages])
Filter messages based on name, type or id.
messages.utils.get_buffer_string(messages[, ...])
Convert a sequence of Messages to strings and concatenate them into one string.
messages.utils.merge_message_runs([messages])
Merge consecutive Messages of the same type.
messages.utils.message_chunk_to_message(chunk)
Convert a message chunk to a message.
messages.utils.messages_from_dict(messages)
Convert a sequence of messages from dicts to Message objects.
messages.utils.trim_messages([messages])
Trim messages to be below a token count.
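Two of the helpers listed above can be sketched in plain Python: `get_buffer_string` concatenates a message sequence into one string, and `merge_message_runs` merges consecutive messages of the same type. Messages are modeled as `(role, content)` tuples here for illustration; the real helpers operate on `BaseMessage` objects.

```python
# Stdlib-only sketches of two message utilities described above.
def get_buffer_string(messages):
    """Concatenate (role, content) messages into one string."""
    return "\n".join(f"{role.capitalize()}: {content}" for role, content in messages)

def merge_message_runs(messages):
    """Merge consecutive messages with the same role into one message."""
    merged = []
    for role, content in messages:
        if merged and merged[-1][0] == role:
            prev_role, prev_content = merged[-1]
            merged[-1] = (role, prev_content + "\n" + content)
        else:
            merged.append((role, content))
    return merged
```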
langchain_core.output_parsers¶ | https://api.python.langchain.com/en/latest/core_api_reference.html |
OutputParser classes parse the output of an LLM call.
Class hierarchy:
BaseLLMOutputParser --> BaseOutputParser --> <name>OutputParser # ListOutputParser, PydanticOutputParser
Main helpers:
Serializable, Generation, PromptValue
Classes¶
output_parsers.base.BaseGenerationOutputParser
Base class to parse the output of an LLM call.
output_parsers.base.BaseLLMOutputParser()
Abstract base class for parsing the outputs of a model.
output_parsers.base.BaseOutputParser
Base class to parse the output of an LLM call.
output_parsers.json.JsonOutputParser
Parse the output of an LLM call to a JSON object.
output_parsers.json.SimpleJsonOutputParser
alias of JsonOutputParser
output_parsers.list.CommaSeparatedListOutputParser
Parse the output of an LLM call to a comma-separated list.
output_parsers.list.ListOutputParser
Parse the output of an LLM call to a list.
output_parsers.list.MarkdownListOutputParser
Parse a Markdown list.
output_parsers.list.NumberedListOutputParser
Parse a numbered list.
output_parsers.openai_functions.JsonKeyOutputFunctionsParser
Parse an output as the element of the Json object.
output_parsers.openai_functions.JsonOutputFunctionsParser
Parse an output as the Json object.
output_parsers.openai_functions.OutputFunctionsParser
Parse an output that is one of sets of values.
output_parsers.openai_functions.PydanticAttrOutputFunctionsParser
Parse an output as an attribute of a pydantic object.
output_parsers.openai_functions.PydanticOutputFunctionsParser
Parse an output as a pydantic object.
output_parsers.openai_tools.JsonOutputKeyToolsParser
Parse tools from OpenAI response. | https://api.python.langchain.com/en/latest/core_api_reference.html |
output_parsers.openai_tools.JsonOutputToolsParser
Parse tools from OpenAI response.
output_parsers.openai_tools.PydanticToolsParser
Parse tools from OpenAI response.
output_parsers.pydantic.PydanticOutputParser
Parse an output using a pydantic model.
output_parsers.string.StrOutputParser
OutputParser that parses LLMResult into the top likely string.
output_parsers.transform.BaseCumulativeTransformOutputParser
Base class for an output parser that can handle streaming input.
output_parsers.transform.BaseTransformOutputParser
Base class for an output parser that can handle streaming input.
output_parsers.xml.XMLOutputParser
Parse an output using xml format.
Functions¶
output_parsers.list.droplastn(iter, n)
Drop the last n elements of an iterator.
output_parsers.openai_tools.make_invalid_tool_call(...)
Create an InvalidToolCall from a raw tool call.
output_parsers.openai_tools.parse_tool_call(...)
Parse a single tool call.
output_parsers.openai_tools.parse_tool_calls(...)
Parse a list of tool calls.
output_parsers.xml.nested_element(path, elem)
Get nested element from path.
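The parsing step performed by a list parser such as `CommaSeparatedListOutputParser` is straightforward: split the raw model text on commas and trim whitespace. A stdlib sketch (illustrative; the real class also supplies format instructions for the prompt):

```python
# Sketch of comma-separated list parsing as done by the parser above.
def parse_comma_separated_list(text: str) -> list:
    """Parse 'a, b, c' -> ['a', 'b', 'c'], dropping empty items."""
    return [part.strip() for part in text.split(",") if part.strip()]
```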
langchain_core.outputs¶
Output classes are used to represent the output of a language model call
and the output of a chat.
The top container for information is the LLMResult object. LLMResult is used by
both chat models and LLMs. This object contains the output of the language
model and any additional information that the model provider wants to return.
When invoking models via the standard runnable methods (e.g. invoke, batch, etc.):
- Chat models will return AIMessage objects.
- LLMs will return regular text strings. | https://api.python.langchain.com/en/latest/core_api_reference.html |
In addition, users can access the raw output of either LLMs or chat models via
callbacks. The on_chat_model_end and on_llm_end callbacks will return an
LLMResult object containing the generated outputs and any additional information
returned by the model provider.
In general, if information is already available
in the AIMessage object, it is recommended to access it from there rather than
from the LLMResult object.
Classes¶
outputs.chat_generation.ChatGeneration
A single chat generation output.
outputs.chat_generation.ChatGenerationChunk
ChatGeneration chunk, which can be concatenated with other ChatGeneration chunks.
outputs.chat_result.ChatResult
Use to represent the result of a chat model call with a single prompt.
outputs.generation.Generation
A single text generation output.
outputs.generation.GenerationChunk
Generation chunk, which can be concatenated with other Generation chunks.
outputs.llm_result.LLMResult
A container for results of an LLM call.
outputs.run_info.RunInfo
Class that contains metadata for a single execution of a Chain or model.
langchain_core.prompt_values¶
Prompt values for language model prompts.
Prompt values are used to represent different pieces of prompts.
They can be used to represent text, images, or chat message pieces.
Classes¶
prompt_values.ChatPromptValue
Chat prompt value.
prompt_values.ChatPromptValueConcrete
Chat prompt value which explicitly lists out the message types it accepts.
prompt_values.ImagePromptValue
Image prompt value.
prompt_values.ImageURL
Image URL.
prompt_values.PromptValue
Base abstract class for inputs to any language model.
prompt_values.StringPromptValue
String prompt value.
langchain_core.prompts¶
Prompt is the input to the model.
Prompt is often constructed
from multiple components and prompt values. Prompt classes and functions make constructing
and working with prompts easy. | https://api.python.langchain.com/en/latest/core_api_reference.html |
Class hierarchy:
BasePromptTemplate --> PipelinePromptTemplate
StringPromptTemplate --> PromptTemplate
FewShotPromptTemplate
FewShotPromptWithTemplates
BaseChatPromptTemplate --> AutoGPTPrompt
ChatPromptTemplate --> AgentScratchPadChatPromptTemplate
BaseMessagePromptTemplate --> MessagesPlaceholder
BaseStringMessagePromptTemplate --> ChatMessagePromptTemplate
HumanMessagePromptTemplate
AIMessagePromptTemplate
SystemMessagePromptTemplate
Classes¶
prompts.base.BasePromptTemplate
Base class for all prompt templates, returning a prompt.
prompts.chat.AIMessagePromptTemplate
AI message prompt template.
prompts.chat.BaseChatPromptTemplate
Base class for chat prompt templates.
prompts.chat.BaseMessagePromptTemplate
Base class for message prompt templates.
prompts.chat.BaseStringMessagePromptTemplate
Base class for message prompt templates that use a string prompt template.
prompts.chat.ChatMessagePromptTemplate
Chat message prompt template.
prompts.chat.ChatPromptTemplate
Prompt template for chat models.
prompts.chat.HumanMessagePromptTemplate
Human message prompt template.
prompts.chat.MessagesPlaceholder
Prompt template that assumes the variable is already a list of messages.
prompts.chat.SystemMessagePromptTemplate
System message prompt template.
prompts.few_shot.FewShotChatMessagePromptTemplate
Chat prompt template that supports few-shot examples.
prompts.few_shot.FewShotPromptTemplate
Prompt template that contains few shot examples.
prompts.few_shot_with_templates.FewShotPromptWithTemplates
Prompt template that contains few shot examples.
prompts.image.ImagePromptTemplate
Image prompt template for a multimodal model.
prompts.pipeline.PipelinePromptTemplate
Prompt template for composing multiple prompt templates together.
prompts.prompt.PromptTemplate
Prompt template for a language model.
prompts.string.StringPromptTemplate | https://api.python.langchain.com/en/latest/core_api_reference.html |
String prompt that exposes the format method, returning a prompt.
prompts.structured.StructuredPrompt
Functions¶
prompts.base.aformat_document(doc, prompt)
Async format a document into a string based on a prompt template.
prompts.base.format_document(doc, prompt)
Format a document into a string based on a prompt template.
prompts.loading.load_prompt(path[, encoding])
Unified method for loading a prompt from LangChainHub or local fs.
prompts.loading.load_prompt_from_config(config)
Load prompt from Config Dict.
prompts.string.check_valid_template(...)
Check that template string is valid.
prompts.string.get_template_variables(...)
Get the variables from the template.
prompts.string.jinja2_formatter(template, ...)
Format a template using jinja2.
prompts.string.mustache_formatter(template, ...)
Format a template using mustache.
prompts.string.mustache_schema(template)
Get the variables from a mustache template.
prompts.string.mustache_template_vars(template)
Get the variables from a mustache template.
prompts.string.validate_jinja2(template, ...)
Validate that the input variables are valid for the template.
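The core behavior of a string prompt template — a template with named variables plus a `format` method that fills them in and validates the inputs — can be sketched with stdlib formatting. This class is illustrative, not the actual `prompts.prompt.PromptTemplate`.

```python
# Sketch of a prompt template: declared input variables, validated at
# format time, substituted with str.format.
class SimplePromptTemplate:
    def __init__(self, template: str, input_variables: list):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs) -> str:
        missing = set(self.input_variables) - set(kwargs)
        if missing:
            raise KeyError(f"Missing variables: {missing}")
        return self.template.format(**kwargs)
```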
langchain_core.rate_limiters¶
Interface for a rate limiter and an in-memory rate limiter.
Classes¶
rate_limiters.BaseRateLimiter(*args, **kwargs)
rate_limiters.InMemoryRateLimiter(*[, ...])
langchain_core.retrievers¶
Retriever class returns Documents given a text query.
It is more general than a vector store. A retriever does not need to be able to
store documents, only to return (or retrieve) them. Vector stores can be used as
the backbone of a retriever, but there are other types of retrievers as well. | https://api.python.langchain.com/en/latest/core_api_reference.html |
Class hierarchy:
BaseRetriever --> <name>Retriever # Examples: ArxivRetriever, MergerRetriever
Main helpers:
RetrieverInput, RetrieverOutput, RetrieverLike, RetrieverOutputLike,
Document, Serializable, Callbacks,
CallbackManagerForRetrieverRun, AsyncCallbackManagerForRetrieverRun
Classes¶
retrievers.BaseRetriever
Abstract base class for a Document retrieval system.
retrievers.LangSmithRetrieverParams
LangSmith parameters for tracing.
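The retriever contract described above — given a text query, return relevant documents, with no requirement to store them in any particular way — can be sketched as a toy keyword retriever over a fixed corpus. Illustrative only; real retrievers subclass `retrievers.BaseRetriever`.

```python
# Sketch of a retriever: score a corpus by word overlap with the query
# and return the top-k documents.
def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Return the k corpus strings sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]
```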
langchain_core.runnables¶
LangChain Runnable and the LangChain Expression Language (LCEL).
The LangChain Expression Language (LCEL) offers a declarative method to build
production-grade programs that harness the power of LLMs.
Programs created using LCEL and LangChain Runnables inherently support
synchronous, asynchronous, batch, and streaming operations.
Support for async allows servers hosting LCEL based programs to scale better
for higher concurrent loads.
Batch operations allow for processing multiple inputs in parallel.
Streaming of intermediate outputs, as they’re being generated, allows for
creating more responsive UX.
This module contains schema and implementation of LangChain Runnables primitives.
Classes¶
runnables.base.Runnable()
A unit of work that can be invoked, batched, streamed, transformed and composed.
runnables.base.RunnableBinding
Wrap a Runnable with additional functionality.
runnables.base.RunnableBindingBase
Runnable that delegates calls to another Runnable with a set of kwargs.
runnables.base.RunnableEach
Runnable that delegates calls to another Runnable with each element of the input sequence.
runnables.base.RunnableEachBase | https://api.python.langchain.com/en/latest/core_api_reference.html |
Runnable that delegates calls to another Runnable with each element of the input sequence.
runnables.base.RunnableGenerator(transform)
Runnable that runs a generator function.
runnables.base.RunnableLambda(func[, afunc, ...])
RunnableLambda converts a python callable into a Runnable.
runnables.base.RunnableMap
alias of RunnableParallel
runnables.base.RunnableParallel
Runnable that runs a mapping of Runnables in parallel, and returns a mapping of their outputs.
runnables.base.RunnableSequence
Sequence of Runnables, where the output of each is the input of the next.
runnables.base.RunnableSerializable
Runnable that can be serialized to JSON.
runnables.branch.RunnableBranch
Runnable that selects which branch to run based on a condition.
runnables.config.ContextThreadPoolExecutor([...])
ThreadPoolExecutor that copies the context to the child thread.
runnables.config.EmptyDict
Empty dict type.
runnables.config.RunnableConfig
Configuration for a Runnable.
runnables.configurable.DynamicRunnable
Serializable Runnable that can be dynamically configured.
runnables.configurable.RunnableConfigurableAlternatives
Runnable that can be dynamically configured.
runnables.configurable.RunnableConfigurableFields
Runnable that can be dynamically configured.
runnables.configurable.StrEnum(value)
String enum.
runnables.fallbacks.RunnableWithFallbacks
Runnable that can fallback to other Runnables if it fails.
runnables.graph.Branch(condition, ends)
Branch in a graph.
runnables.graph.CurveStyle(value)
Enum for different curve styles supported by Mermaid
runnables.graph.Edge(source, target[, data, ...])
Edge in a graph.
runnables.graph.Graph(nodes, ...)
Graph of nodes and edges. | https://api.python.langchain.com/en/latest/core_api_reference.html |
runnables.graph.LabelsDict
Dictionary of labels for nodes and edges in a graph.
runnables.graph.MermaidDrawMethod(value)
Enum for different draw methods supported by Mermaid
runnables.graph.Node(id, name, data, metadata)
Node in a graph.
runnables.graph.NodeStyles([default, first, ...])
Schema for Hexadecimal color codes for different node types.
runnables.graph.Stringifiable(*args, **kwargs)
runnables.graph_ascii.AsciiCanvas(cols, lines)
Class for drawing in ASCII.
runnables.graph_ascii.VertexViewer(name)
Class to define vertex box boundaries that will be accounted for during graph building by grandalf.
runnables.graph_png.PngDrawer([fontname, labels])
Helper class to draw a state graph into a PNG file.
runnables.history.RunnableWithMessageHistory
Runnable that manages chat message history for another Runnable.
runnables.passthrough.RunnableAssign
Runnable that assigns key-value pairs to Dict[str, Any] inputs.
runnables.passthrough.RunnablePassthrough
Runnable to passthrough inputs unchanged or with additional keys.
runnables.passthrough.RunnablePick
Runnable that picks keys from Dict[str, Any] inputs.
runnables.retry.RunnableRetry
Retry a Runnable if it fails.
runnables.router.RouterInput
Router input.
runnables.router.RouterRunnable
Runnable that routes to a set of Runnables based on Input['key'].
runnables.schema.BaseStreamEvent
Streaming event.
runnables.schema.CustomStreamEvent
Custom stream event created by the user.
runnables.schema.EventData
Data associated with a streaming event.
runnables.schema.StandardStreamEvent | https://api.python.langchain.com/en/latest/core_api_reference.html |
A standard stream event that follows LangChain convention for event data.
runnables.utils.AddableDict
Dictionary that can be added to another dictionary.
runnables.utils.ConfigurableField(id[, ...])
Field that can be configured by the user.
runnables.utils.ConfigurableFieldMultiOption(id, ...)
Field that can be configured by the user with multiple default values.
runnables.utils.ConfigurableFieldSingleOption(id, ...)
Field that can be configured by the user with a default value.
runnables.utils.ConfigurableFieldSpec(id, ...)
Field that can be configured by the user.
runnables.utils.FunctionNonLocals()
Get the nonlocal variables accessed of a function.
runnables.utils.GetLambdaSource()
Get the source code of a lambda function.
runnables.utils.IsFunctionArgDict()
Check if the first argument of a function is a dict.
runnables.utils.IsLocalDict(name, keys)
Check if a name is a local dict.
runnables.utils.NonLocals()
Get nonlocal variables accessed.
runnables.utils.SupportsAdd(*args, **kwargs)
Protocol for objects that support addition.
Functions¶
runnables.base.chain()
Decorate a function to make it a Runnable.
runnables.base.coerce_to_runnable(thing)
Coerce a Runnable-like object into a Runnable.
runnables.config.acall_func_with_variable_args(...)
Async call function that may optionally accept a run_manager and/or config.
runnables.config.call_func_with_variable_args(...)
Call function that may optionally accept a run_manager and/or config.
runnables.config.ensure_config([config])
Ensure that a config is a dict with all keys present. | https://api.python.langchain.com/en/latest/core_api_reference.html |
runnables.config.get_async_callback_manager_for_config(config)
Get an async callback manager for a config.
runnables.config.get_callback_manager_for_config(config)
Get a callback manager for a config.
runnables.config.get_config_list(config, length)
Get a list of configs from a single config or a list of configs.
runnables.config.get_executor_for_config(config)
Get an executor for a config.
runnables.config.merge_configs(*configs)
Merge multiple configs into one.
runnables.config.patch_config(config, *[, ...])
Patch a config with new values.
runnables.config.run_in_executor(...)
Run a function in an executor.
runnables.configurable.make_options_spec(...)
Make a ConfigurableFieldSpec for a ConfigurableFieldSingleOption or ConfigurableFieldMultiOption.
runnables.configurable.prefix_config_spec(...)
Prefix the id of a ConfigurableFieldSpec.
runnables.graph.is_uuid(value)
Check if a string is a valid UUID.
runnables.graph.node_data_json(node, *[, ...])
Convert the data of a node to a JSON-serializable format.
runnables.graph.node_data_str(id, data)
Convert the data of a node to a string.
runnables.graph_ascii.draw_ascii(vertices, edges)
Build a DAG and draw it in ASCII.
runnables.graph_mermaid.draw_mermaid(nodes, ...)
Draws a Mermaid graph using the provided graph data.
runnables.graph_mermaid.draw_mermaid_png(...)
Draws a Mermaid graph as PNG using provided syntax.
runnables.passthrough.aidentity(x)
Async identity function.
runnables.passthrough.identity(x)
Identity function. | https://api.python.langchain.com/en/latest/core_api_reference.html |
runnables.utils.aadd(addables)
Asynchronously add a sequence of addable objects together.
runnables.utils.accepts_config(callable)
Check if a callable accepts a config argument.
runnables.utils.accepts_context(callable)
Check if a callable accepts a context argument.
runnables.utils.accepts_run_manager(callable)
Check if a callable accepts a run_manager argument.
runnables.utils.add(addables)
Add a sequence of addable objects together.
runnables.utils.create_model(__model_name, ...)
Create a pydantic model with the given field definitions.
runnables.utils.gated_coro(semaphore, coro)
Run a coroutine with a semaphore.
runnables.utils.gather_with_concurrency(n, ...)
Gather coroutines with a limit on the number of concurrent coroutines.
runnables.utils.get_function_first_arg_dict_keys(func)
Get the keys of the first argument of a function if it is a dict.
runnables.utils.get_function_nonlocals(func)
Get the nonlocal variables accessed by a function.
runnables.utils.get_lambda_source(func)
Get the source code of a lambda function.
runnables.utils.get_unique_config_specs(specs)
Get the unique config specs from a sequence of config specs.
runnables.utils.indent_lines_after_first(...)
Indent all lines of text after the first line.
runnables.utils.is_async_callable(func)
Check if a function is async.
runnables.utils.is_async_generator(func)
Check if a function is an async generator.
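The two core composition primitives in this module can be sketched without the library: a sequence (the output of each step feeds the next, as in `RunnableSequence`) and a parallel map (every runnable receives the same input, as in `RunnableParallel`). Real code composes `Runnable` objects with the `|` operator; these stdlib functions are only a sketch of the semantics.

```python
# Stdlib sketches of sequential and parallel composition semantics.
def run_sequence(steps, value):
    """RunnableSequence-like: thread `value` through `steps` left to right."""
    for step in steps:
        value = step(value)
    return value

def run_parallel(steps: dict, value):
    """RunnableParallel-like: apply every step to the same input."""
    return {name: step(value) for name, step in steps.items()}
```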
langchain_core.stores¶
Store implements the key-value stores and storage helpers.
This module provides implementations of various key-value stores that conform
to a simple key-value interface. | https://api.python.langchain.com/en/latest/core_api_reference.html |
The primary goal of these stores is to support the implementation of caching.
Classes¶
stores.BaseStore()
Abstract interface for a key-value store.
stores.InMemoryBaseStore()
In-memory implementation of the BaseStore using a dictionary.
stores.InMemoryByteStore()
In-memory store for bytes.
stores.InMemoryStore()
In-memory store for any type of data.
stores.InvalidKeyException
Raised when a key is invalid; e.g., uses incorrect characters.
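The batch-oriented key-value interface these stores conform to can be sketched over a plain dict, much like the in-memory implementations above: `mset` takes key-value pairs, `mget` returns values (with `None` for misses), and `mdelete` removes keys. Illustrative, not the actual `BaseStore` class.

```python
# Sketch of the simple batch key-value interface described above.
class InMemoryKV:
    def __init__(self):
        self._data = {}

    def mset(self, pairs):
        """Set values for the given (key, value) pairs."""
        for key, value in pairs:
            self._data[key] = value

    def mget(self, keys):
        """Get values for the given keys; None for missing keys."""
        return [self._data.get(k) for k in keys]

    def mdelete(self, keys):
        """Delete the given keys, ignoring missing ones."""
        for k in keys:
            self._data.pop(k, None)
```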
langchain_core.structured_query¶
Internal representation of a structured query language.
Classes¶
structured_query.Comparator(value)
Enumerator of the comparison operators.
structured_query.Comparison
Comparison to a value.
structured_query.Expr
Base class for all expressions.
structured_query.FilterDirective
Filtering expression.
structured_query.Operation
Logical operation over other directives.
structured_query.Operator(value)
Enumerator of the operations.
structured_query.StructuredQuery
Structured query.
structured_query.Visitor()
Defines interface for IR translation using a visitor pattern.
langchain_core.sys_info¶
sys_info prints information about the system and langchain packages
for debugging purposes.
Functions¶
sys_info.print_sys_info(*[, additional_pkgs])
Print information about the environment for debugging purposes.
langchain_core.tools¶
Tools are classes that an Agent uses to interact with the world.
Each tool has a description. The agent uses the description to choose the right
tool for the job.
Class hierarchy:
RunnableSerializable --> BaseTool --> <name>Tool # Examples: AIPluginTool, BaseGraphQLTool
<name> # Examples: BraveSearch, HumanInputRun
Main helpers:
CallbackManagerForToolRun, AsyncCallbackManagerForToolRun
Classes¶
tools.base.BaseTool
Interface LangChain tools must implement. | https://api.python.langchain.com/en/latest/core_api_reference.html |
tools.base.BaseToolkit
Base Toolkit representing a collection of related tools.
tools.base.InjectedToolArg()
Annotation for a Tool arg that is not meant to be generated by a model.
tools.base.SchemaAnnotationError
Raised when 'args_schema' is missing or has an incorrect type annotation.
tools.base.ToolException
Optional exception that tool throws when execution error occurs.
tools.retriever.RetrieverInput
Input to the retriever.
tools.simple.Tool
Tool that takes in function or coroutine directly.
tools.structured.StructuredTool
Tool that can operate on any number of inputs.
Functions¶
tools.base.create_schema_from_function(...)
Create a pydantic schema from a function's signature.
tools.convert.convert_runnable_to_tool(runnable)
Convert a Runnable into a BaseTool.
tools.convert.tool(*args[, return_direct, ...])
Make tools out of functions, can be used with or without arguments.
tools.render.render_text_description(tools)
Render the tool name and description in plain text.
tools.render.render_text_description_and_args(tools)
Render the tool name, description, and args in plain text.
tools.retriever.create_retriever_tool(...[, ...])
Create a tool to do retrieval of documents.
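create_schema_from_function builds an args schema by inspecting a function's signature. The sketch below shows the general idea in plain Python with the inspect module; the schema_from_function helper and its exact output shape are illustrative, not the LangChain implementation:

```python
import inspect
from typing import Any, Dict


def schema_from_function(func) -> Dict[str, Any]:
    """Derive a minimal JSON-schema-like description from a function signature."""
    sig = inspect.signature(func)
    type_names = {int: "integer", float: "number", str: "string", bool: "boolean"}
    properties: Dict[str, Any] = {}
    required = []
    for name, param in sig.parameters.items():
        # Fall back to "string" for unannotated or unknown parameter types.
        properties[name] = {"type": type_names.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties, "required": required},
    }


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


print(schema_from_function(multiply))
```

A schema like this is what lets a model decide which tool to call and with which arguments.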
langchain_core.tracers¶
Tracers are classes for tracing runs.
Class hierarchy:
BaseCallbackHandler --> BaseTracer --> <name>Tracer # Examples: LangChainTracer, RootListenersTracer
--> <name> # Examples: LogStreamCallbackHandler
Classes¶
tracers.base.AsyncBaseTracer(*[, _schema_format])
Async Base interface for tracers.
tracers.base.BaseTracer(*[, _schema_format])
Base interface for tracers. | https://api.python.langchain.com/en/latest/core_api_reference.html |
tracers.evaluation.EvaluatorCallbackHandler(...)
Tracer that runs a run evaluator whenever a run is persisted.
tracers.event_stream.RunInfo
Information about a run.
tracers.langchain.LangChainTracer([...])
Implementation of the SharedTracer that POSTS to the LangChain endpoint.
tracers.log_stream.LogEntry
A single entry in the run log.
tracers.log_stream.LogStreamCallbackHandler(*)
Tracer that streams run logs to a stream.
tracers.log_stream.RunLog(*ops, state)
Run log.
tracers.log_stream.RunLogPatch(*ops)
Patch to the run log.
tracers.log_stream.RunState
State of the run.
tracers.root_listeners.AsyncRootListenersTracer(*, ...)
Async Tracer that calls listeners on run start, end, and error.
tracers.root_listeners.RootListenersTracer(*, ...)
Tracer that calls listeners on run start, end, and error.
tracers.run_collector.RunCollectorCallbackHandler([...])
Tracer that collects all nested runs in a list.
tracers.schemas.Run
Run schema for the V2 API in the Tracer.
tracers.stdout.ConsoleCallbackHandler(**kwargs)
Tracer that prints to the console.
tracers.stdout.FunctionCallbackHandler(...)
Tracer that calls a function with a single str parameter.
Functions¶
tracers.context.collect_runs()
Collect all run traces in context.
tracers.context.register_configure_hook(...)
Register a configure hook.
tracers.context.tracing_enabled([session_name])
Throw an error because this has been replaced by tracing_v2_enabled.
tracers.context.tracing_v2_enabled([...])
Instruct LangChain to log all runs in context to LangSmith. | https://api.python.langchain.com/en/latest/core_api_reference.html |
tracers.evaluation.wait_for_all_evaluators()
Wait for all tracers to finish.
tracers.langchain.get_client()
Get the client.
tracers.langchain.log_error_once(method, ...)
Log an error once.
tracers.langchain.wait_for_all_tracers()
Wait for all tracers to finish.
tracers.langchain_v1.LangChainTracerV1(...)
Throw an error because this has been replaced by LangChainTracer.
tracers.langchain_v1.get_headers(*args, **kwargs)
Throw an error because this has been replaced by get_headers.
tracers.stdout.elapsed(run)
Get the elapsed time of a run.
tracers.stdout.try_json_stringify(obj, fallback)
Try to stringify an object to JSON.
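Tracers such as RunCollectorCallbackHandler receive start/end callbacks and assemble nested Run records. A simplified stand-alone sketch of that bookkeeping (the Run and CollectingTracer types here are illustrative, not the langchain_core classes):

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Run:
    name: str
    parent: Optional["Run"] = None
    children: List["Run"] = field(default_factory=list)


class CollectingTracer:
    """Illustrative tracer: records every run and nests children under parents."""

    def __init__(self) -> None:
        self.runs: List[Run] = []    # top-level runs only
        self._stack: List[Run] = []  # currently-open runs

    def on_run_start(self, name: str) -> Run:
        run = Run(name, parent=self._stack[-1] if self._stack else None)
        if run.parent is not None:
            run.parent.children.append(run)
        else:
            self.runs.append(run)
        self._stack.append(run)
        return run

    def on_run_end(self) -> None:
        self._stack.pop()


tracer = CollectingTracer()
tracer.on_run_start("chain")
tracer.on_run_start("llm")
tracer.on_run_end()
tracer.on_run_end()
print(tracer.runs[0].name, [c.name for c in tracer.runs[0].children])  # chain ['llm']
```

The real tracers add timing, inputs/outputs, and error states to each run, but the nesting logic follows this shape.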
Deprecated classes¶
tracers.schemas.BaseRun
Deprecated since version 0.1.0: Use Run instead.
tracers.schemas.ChainRun
Deprecated since version 0.1.0: Use Run instead.
tracers.schemas.LLMRun
Deprecated since version 0.1.0: Use Run instead.
tracers.schemas.ToolRun
Deprecated since version 0.1.0: Use Run instead.
tracers.schemas.TracerSession
Deprecated since version 0.1.0.
tracers.schemas.TracerSessionBase
Deprecated since version 0.1.0.
tracers.schemas.TracerSessionV1
Deprecated since version 0.1.0.
tracers.schemas.TracerSessionV1Base
Deprecated since version 0.1.0.
tracers.schemas.TracerSessionV1Create
Deprecated since version 0.1.0.
Deprecated functions¶
tracers.schemas.RunTypeEnum() | https://api.python.langchain.com/en/latest/core_api_reference.html |
Deprecated since version 0.1.0: Use string instead.
langchain_core.utils¶
Utility functions for LangChain.
These functions do not depend on any other LangChain module.
Classes¶
utils.aiter.NoLock()
Dummy lock that provides the proper interface but no protection.
utils.aiter.Tee(iterable[, n, lock])
Create n separate asynchronous iterators over iterable.
utils.aiter.aclosing(thing)
Async context manager for safely finalizing an asynchronously cleaned-up resource such as an async generator, calling its aclose() method.
utils.aiter.atee
alias of Tee
utils.formatting.StrictFormatter()
Formatter that checks for extra keys.
utils.function_calling.FunctionDescription
Representation of a callable function to send to an LLM.
utils.function_calling.ToolDescription
Representation of a callable function to the OpenAI API.
utils.iter.NoLock()
Dummy lock that provides the proper interface but no protection.
utils.iter.Tee(iterable[, n, lock])
Create n separate iterators over iterable.
utils.iter.safetee
alias of Tee
utils.mustache.ChevronError
Custom exception for Chevron errors.
Functions¶
utils.aiter.abatch_iterate(size, iterable)
Utility batching function for async iterables.
utils.aiter.py_anext(iterator[, default])
Pure-Python implementation of anext() for testing purposes.
utils.aiter.tee_peer(iterator, buffer, ...)
An individual iterator of a tee().
utils.env.env_var_is_set(env_var)
Check if an environment variable is set.
utils.env.get_from_dict_or_env(data, key, ...)
Get a value from a dictionary or an environment variable. | https://api.python.langchain.com/en/latest/core_api_reference.html |
utils.env.get_from_env(key, env_key[, default])
Get a value from a dictionary or an environment variable.
utils.function_calling.convert_to_openai_function(...)
Convert a raw function/class to an OpenAI function.
utils.function_calling.convert_to_openai_tool(tool, *)
Convert a raw function/class to an OpenAI tool.
utils.function_calling.tool_example_to_messages(...)
Convert an example into a list of messages that can be fed into an LLM.
utils.html.extract_sub_links(raw_html, url, *)
Extract all links from a raw HTML string and convert into absolute paths.
utils.html.find_all_links(raw_html, *[, pattern])
Extract all links from a raw HTML string.
utils.image.encode_image(image_path)
Get base64 string from image URI.
utils.image.image_to_data_url(image_path)
Get data URL from image URI.
utils.input.get_bolded_text(text)
Get bolded text.
utils.input.get_color_mapping(items[, ...])
Get mapping for items to a supported color.
utils.input.get_colored_text(text, color)
Get colored text.
utils.input.print_text(text[, color, end, file])
Print text with highlighting and no end characters.
utils.interactive_env.is_interactive_env()
Determine if running within IPython or Jupyter.
utils.iter.batch_iterate(size, iterable)
Utility batching function.
utils.iter.tee_peer(iterator, buffer, peers, ...)
An individual iterator of a tee().
utils.json.parse_and_check_json_markdown(...)
Parse a JSON string from a Markdown string and check that it contains the expected keys.
utils.json.parse_json_markdown(json_string, *)
Parse a JSON string from a Markdown string. | https://api.python.langchain.com/en/latest/core_api_reference.html |
utils.json.parse_partial_json(s, *[, strict])
Parse a JSON string that may be missing closing braces.
utils.json_schema.dereference_refs(schema_obj, *)
Try to substitute $refs in JSON Schema.
utils.mustache.grab_literal(template, l_del)
Parse a literal from the template.
utils.mustache.l_sa_check(template, literal, ...)
Do a preliminary check to see if a tag could be a standalone.
utils.mustache.parse_tag(template, l_del, r_del)
Parse a tag from a template.
utils.mustache.r_sa_check(template, ...)
Do a final check to see if a tag could be a standalone.
utils.mustache.render([template, data, ...])
Render a mustache template.
utils.mustache.tokenize(template[, ...])
Tokenize a mustache template.
utils.pydantic.get_fields()
Get the field names of a Pydantic model.
utils.pydantic.get_pydantic_major_version()
Get the major version of Pydantic.
utils.pydantic.is_basemodel_instance(obj)
Check if the given class is an instance of Pydantic BaseModel.
utils.pydantic.is_basemodel_subclass(cls)
Check if the given class is a subclass of Pydantic BaseModel.
utils.pydantic.is_pydantic_v1_subclass(cls)
Check if the given class is a Pydantic 1.x-style model subclass.
utils.pydantic.is_pydantic_v2_subclass(cls)
Check if the given class is a Pydantic 2.x-style model subclass.
utils.pydantic.pre_init(func)
Decorator to run a function before model initialization.
utils.strings.comma_list(items)
Convert a list to a comma-separated string.
utils.strings.stringify_dict(data) | https://api.python.langchain.com/en/latest/core_api_reference.html |
Stringify a dictionary.
utils.strings.stringify_value(val)
Stringify a value.
utils.utils.build_extra_kwargs(extra_kwargs, ...)
Build extra kwargs from values and extra_kwargs.
utils.utils.check_package_version(package[, ...])
Check the version of a package.
utils.utils.convert_to_secret_str(value)
Convert a string to a SecretStr if needed.
utils.utils.from_env()
Create a factory method that gets a value from an environment variable.
utils.utils.get_pydantic_field_names(...)
Get field names, including aliases, for a pydantic class.
utils.utils.guard_import(module_name, *[, ...])
Dynamically import a module and raise an exception if the module is not installed.
utils.utils.mock_now(dt_value)
Context manager for mocking out datetime.now() in unit tests.
utils.utils.raise_for_status_with_text(response)
Raise an error with the response text.
utils.utils.secret_from_env()
Create a factory method that gets a secret from an environment variable.
utils.utils.xor_args(*arg_groups)
Validate that the specified keyword args are mutually exclusive.
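As an example of the iteration helpers, batch_iterate chunks any iterable into fixed-size lists. A pure-Python equivalent (illustrative; the real helper lives in langchain_core.utils.iter):

```python
from itertools import islice
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")


def batch_iterate(size: int, iterable: Iterable[T]) -> Iterator[List[T]]:
    """Yield successive lists of at most `size` items from any iterable."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:  # iterable exhausted
            return
        yield batch


print(list(batch_iterate(3, range(8))))  # [[0, 1, 2], [3, 4, 5], [6, 7]]
```

Batching like this is what lets embedding and indexing code send bounded-size requests to a backend.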
Deprecated functions¶
utils.function_calling.convert_pydantic_to_openai_function(...)
Deprecated since version 0.1.16: Use langchain_core.utils.function_calling.convert_to_openai_function() instead.
utils.function_calling.convert_pydantic_to_openai_tool(...)
Deprecated since version 0.1.16: Use langchain_core.utils.function_calling.convert_to_openai_tool() instead.
utils.function_calling.convert_python_function_to_openai_function(...)
Deprecated since version 0.1.16: Use langchain_core.utils.function_calling.convert_to_openai_function() instead.
utils.function_calling.format_tool_to_openai_function(tool) | https://api.python.langchain.com/en/latest/core_api_reference.html |
Deprecated since version 0.1.16: Use langchain_core.utils.function_calling.convert_to_openai_function() instead.
utils.function_calling.format_tool_to_openai_tool(tool)
Deprecated since version 0.1.16: Use langchain_core.utils.function_calling.convert_to_openai_tool() instead.
utils.loading.try_load_from_hub(*args, **kwargs)
Deprecated since version 0.1.30: Using the hwchase17/langchain-hub repo for prompts is deprecated. Please use <https://smith.langchain.com/hub> instead.
langchain_core.vectorstores¶
Classes¶
vectorstores.base.VectorStore()
Interface for vector store.
vectorstores.base.VectorStoreRetriever
Base Retriever class for VectorStore.
vectorstores.in_memory.InMemoryVectorStore(...)
In-memory vector store implementation.
Functions¶
vectorstores.utils.maximal_marginal_relevance(...)
Calculate maximal marginal relevance. | https://api.python.langchain.com/en/latest/core_api_reference.html |
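maximal_marginal_relevance trades off relevance to the query against redundancy with already-selected results. A pure-Python sketch of the greedy selection (the cosine helper and the example vectors are illustrative):

```python
import math
from typing import List


def _norm(v: List[float]) -> float:
    return math.sqrt(sum(x * x for x in v))


def cosine(a: List[float], b: List[float]) -> float:
    return sum(x * y for x, y in zip(a, b)) / (_norm(a) * _norm(b))


def mmr(query: List[float], candidates: List[List[float]],
        k: int = 2, lambda_mult: float = 0.5) -> List[int]:
    """Greedily pick k candidate indices balancing relevance vs. redundancy."""
    selected: List[int] = []
    while len(selected) < min(k, len(candidates)):
        best_idx, best_score = None, -float("inf")
        for i, vec in enumerate(candidates):
            if i in selected:
                continue
            relevance = cosine(query, vec)
            redundancy = max((cosine(vec, candidates[j]) for j in selected),
                             default=0.0)
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return selected


docs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(mmr([1.0, 0.2], docs, k=2))  # [1, 2]
```

With this query the second pick skips the near-duplicate of the first result in favor of the more diverse third vector, which is the point of MMR over pure similarity ranking.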
langchain_community 0.2.16¶
langchain_community.adapters¶
Adapters are used to adapt LangChain models to other APIs.
LangChain integrates with many model providers.
While LangChain has its own message and model APIs,
it also makes it as easy as possible to explore other models
by exposing adapters that adapt LangChain models to other APIs,
such as the OpenAI API.
Classes¶
adapters.openai.Chat()
Chat.
adapters.openai.ChatCompletion()
Chat completion.
adapters.openai.ChatCompletionChunk
Chat completion chunk.
adapters.openai.ChatCompletions
Chat completions.
adapters.openai.Choice
Choice.
adapters.openai.ChoiceChunk
Choice chunk.
adapters.openai.Completions()
Completions.
adapters.openai.IndexableBaseModel
Allows a BaseModel to return its fields by string variable indexing.
Functions¶
adapters.openai.aenumerate(iterable[, start])
Async version of enumerate function.
adapters.openai.convert_dict_to_message(_dict)
Convert a dictionary to a LangChain message.
adapters.openai.convert_message_to_dict(message)
Convert a LangChain message to a dictionary.
adapters.openai.convert_messages_for_finetuning(...)
Convert messages to a list of lists of dictionaries for fine-tuning.
adapters.openai.convert_openai_messages(messages)
Convert dictionaries representing OpenAI messages to LangChain format.
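The adapter functions above translate between OpenAI's role/content dictionaries and LangChain message objects. A minimal sketch of that round-trip with stand-in message classes (illustrative; the real converters handle more roles, tool calls, and extra fields):

```python
from dataclasses import dataclass


@dataclass
class HumanMessage:
    content: str


@dataclass
class AIMessage:
    content: str


def convert_dict_to_message(d: dict):
    """Map an OpenAI-style {'role': ..., 'content': ...} dict to a message object."""
    role_map = {"user": HumanMessage, "assistant": AIMessage}
    return role_map[d["role"]](content=d["content"])


def convert_message_to_dict(msg) -> dict:
    """Map a message object back to the OpenAI dict format."""
    role = "user" if isinstance(msg, HumanMessage) else "assistant"
    return {"role": role, "content": msg.content}


msg = convert_dict_to_message({"role": "user", "content": "hi"})
print(msg)                            # HumanMessage(content='hi')
print(convert_message_to_dict(msg))   # {'role': 'user', 'content': 'hi'}
```

The round-trip property (dict → message → identical dict) is what makes adapters safe to insert between LangChain and an OpenAI-compatible client.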
langchain_community.agent_toolkits¶
Toolkits are sets of tools that can be used to interact with
various services and APIs.
Classes¶
agent_toolkits.ainetwork.toolkit.AINetworkToolkit
Toolkit for interacting with AINetwork Blockchain.
agent_toolkits.amadeus.toolkit.AmadeusToolkit
Toolkit for interacting with Amadeus which offers APIs for travel. | https://api.python.langchain.com/en/latest/community_api_reference.html |
agent_toolkits.azure_ai_services.AzureAiServicesToolkit
Toolkit for Azure AI Services.
agent_toolkits.azure_cognitive_services.AzureCognitiveServicesToolkit
Toolkit for Azure Cognitive Services.
agent_toolkits.cassandra_database.toolkit.CassandraDatabaseToolkit
Toolkit for interacting with an Apache Cassandra database.
agent_toolkits.clickup.toolkit.ClickupToolkit
Clickup Toolkit.
agent_toolkits.cogniswitch.toolkit.CogniswitchToolkit
Toolkit for CogniSwitch.
agent_toolkits.connery.toolkit.ConneryToolkit
Toolkit with a list of Connery Actions as tools.
agent_toolkits.file_management.toolkit.FileManagementToolkit
Toolkit for interacting with local files.
agent_toolkits.financial_datasets.toolkit.FinancialDatasetsToolkit
Toolkit for interacting with financialdatasets.ai.
agent_toolkits.github.toolkit.BranchName
Schema for operations that require a branch name as input.
agent_toolkits.github.toolkit.CommentOnIssue
Schema for operations that require a comment as input.
agent_toolkits.github.toolkit.CreateFile
Schema for operations that require a file path and content as input.
agent_toolkits.github.toolkit.CreatePR
Schema for operations that require a PR title and body as input.
agent_toolkits.github.toolkit.CreateReviewRequest
Schema for operations that require a username as input.
agent_toolkits.github.toolkit.DeleteFile
Schema for operations that require a file path as input.
agent_toolkits.github.toolkit.DirectoryPath
Schema for operations that require a directory path as input.
agent_toolkits.github.toolkit.GetIssue
Schema for operations that require an issue number as input.
agent_toolkits.github.toolkit.GetPR
Schema for operations that require a PR number as input.
agent_toolkits.github.toolkit.GitHubToolkit
GitHub Toolkit.
agent_toolkits.github.toolkit.NoInput | https://api.python.langchain.com/en/latest/community_api_reference.html |
Schema for operations that do not require any input.
agent_toolkits.github.toolkit.ReadFile
Schema for operations that require a file path as input.
agent_toolkits.github.toolkit.SearchCode
Schema for operations that require a search query as input.
agent_toolkits.github.toolkit.SearchIssuesAndPRs
Schema for operations that require a search query as input.
agent_toolkits.github.toolkit.UpdateFile
Schema for operations that require a file path and content as input.
agent_toolkits.gitlab.toolkit.GitLabToolkit
GitLab Toolkit.
agent_toolkits.gmail.toolkit.GmailToolkit
Toolkit for interacting with Gmail.
agent_toolkits.jira.toolkit.JiraToolkit
Jira Toolkit.
agent_toolkits.json.toolkit.JsonToolkit
Toolkit for interacting with a JSON spec.
agent_toolkits.multion.toolkit.MultionToolkit
Toolkit for interacting with the Browser Agent.
agent_toolkits.nasa.toolkit.NasaToolkit
Nasa Toolkit.
agent_toolkits.nla.tool.NLATool
Natural Language API Tool.
agent_toolkits.nla.toolkit.NLAToolkit
Natural Language API Toolkit.
agent_toolkits.office365.toolkit.O365Toolkit
Toolkit for interacting with Office 365.
agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing
Tool that sends a DELETE request and parses the response.
agent_toolkits.openapi.planner.RequestsGetToolWithParsing
Requests GET tool with LLM-instructed extraction of truncated responses.
agent_toolkits.openapi.planner.RequestsPatchToolWithParsing
Requests PATCH tool with LLM-instructed extraction of truncated responses.
agent_toolkits.openapi.planner.RequestsPostToolWithParsing
Requests POST tool with LLM-instructed extraction of truncated responses.
agent_toolkits.openapi.planner.RequestsPutToolWithParsing | https://api.python.langchain.com/en/latest/community_api_reference.html |
Requests PUT tool with LLM-instructed extraction of truncated responses.
agent_toolkits.openapi.spec.ReducedOpenAPISpec(...)
A reduced OpenAPI spec.
agent_toolkits.openapi.toolkit.OpenAPIToolkit
Toolkit for interacting with an OpenAPI API.
agent_toolkits.openapi.toolkit.RequestsToolkit
Toolkit for making REST requests.
agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit
Toolkit for PlayWright browser tools.
agent_toolkits.polygon.toolkit.PolygonToolkit
Polygon Toolkit.
agent_toolkits.powerbi.toolkit.PowerBIToolkit
Toolkit for interacting with Power BI dataset.
agent_toolkits.slack.toolkit.SlackToolkit
Toolkit for interacting with Slack.
agent_toolkits.spark_sql.toolkit.SparkSQLToolkit
Toolkit for interacting with Spark SQL.
agent_toolkits.sql.toolkit.SQLDatabaseToolkit
SQLDatabaseToolkit for interacting with SQL databases.
agent_toolkits.steam.toolkit.SteamToolkit
Steam Toolkit.
agent_toolkits.zapier.toolkit.ZapierToolkit
Zapier Toolkit.
Functions¶
agent_toolkits.json.base.create_json_agent(...)
Construct a json agent from an LLM and tools.
agent_toolkits.load_tools.get_all_tool_names()
Get a list of all possible tool names.
agent_toolkits.load_tools.load_huggingface_tool(...)
Loads a tool from the HuggingFace Hub.
agent_toolkits.load_tools.load_tools(tool_names)
Load tools based on their name.
agent_toolkits.load_tools.raise_dangerous_tools_exception(name)
agent_toolkits.openapi.base.create_openapi_agent(...)
Construct an OpenAPI agent from an LLM and tools.
agent_toolkits.openapi.planner.create_openapi_agent(...)
Construct an OpenAI API planner and controller for a given spec. | https://api.python.langchain.com/en/latest/community_api_reference.html |
agent_toolkits.openapi.spec.reduce_openapi_spec(spec)
Simplify/distill/minify an OpenAPI spec down to its essential endpoint information.
agent_toolkits.powerbi.base.create_pbi_agent(llm)
Construct a Power BI agent from an LLM and tools.
agent_toolkits.powerbi.chat_base.create_pbi_chat_agent(llm)
Construct a Power BI agent from a Chat LLM and tools.
agent_toolkits.spark_sql.base.create_spark_sql_agent(...)
Construct a Spark SQL agent from an LLM and tools.
agent_toolkits.sql.base.create_sql_agent(llm)
Construct a SQL agent from an LLM and toolkit or database.
langchain_community.agents¶
Classes¶
agents.openai_assistant.base.OpenAIAssistantV2Runnable
langchain_community.cache¶
Warning
Beta Feature!
Cache provides an optional caching layer for LLMs.
Cache is useful for two reasons:
It can save you money by reducing the number of API calls you make to the LLM
provider if you’re often requesting the same completion multiple times.
It can speed up your application by reducing the number of API calls you make
to the LLM provider.
Cache directly competes with Memory. See documentation for Pros and Cons.
Class hierarchy:
BaseCache --> <name>Cache # Examples: InMemoryCache, RedisCache, GPTCache
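A cache implements two operations: lookup(prompt, llm_string) and update(prompt, llm_string, result), keyed on both the prompt and a string describing the model configuration, so the same prompt sent to a differently-configured model is not a cache hit. A sketch of the in-memory case (TinyInMemoryCache and the llm_config string are illustrative, not the langchain_community API):

```python
from typing import Dict, Optional, Tuple


class TinyInMemoryCache:
    """Illustrative LLM cache keyed on (prompt, llm_string), like InMemoryCache."""

    def __init__(self) -> None:
        self._cache: Dict[Tuple[str, str], str] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[str]:
        # A hit means the exact same prompt was sent to the exact same model config.
        return self._cache.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, result: str) -> None:
        self._cache[(prompt, llm_string)] = result


cache = TinyInMemoryCache()
llm_config = "openai/gpt-4o-mini|temperature=0"  # hypothetical config string
if cache.lookup("What is 2+2?", llm_config) is None:
    cache.update("What is 2+2?", llm_config, "4")  # pretend this came from the LLM
print(cache.lookup("What is 2+2?", llm_config))  # 4
```

The Redis, Cassandra, and SQL variants keep the same contract but persist the mapping, and the semantic caches replace exact-match lookup with vector similarity.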
Classes¶
cache.AsyncRedisCache(redis_, *[, ttl])
Cache that uses Redis as a backend.
cache.AzureCosmosDBSemanticCache(...[, ...])
Cache that uses Cosmos DB Mongo vCore as a vector-store backend.
cache.CassandraCache([session, keyspace, ...])
Cache that uses Cassandra / Astra DB as a backend.
cache.CassandraSemanticCache([session, ...]) | https://api.python.langchain.com/en/latest/community_api_reference.html |
Cache that uses Cassandra as a vector-store backend for semantic (i.e. similarity-based) lookup.
cache.FullLLMCache(**kwargs)
SQLite table for full LLM Cache (all generations).
cache.FullMd5LLMCache(**kwargs)
SQLite table for full LLM Cache (all generations).
cache.GPTCache([init_func])
Cache that uses GPTCache as a backend.
cache.InMemoryCache()
Cache that stores things in memory.
cache.MomentoCache(cache_client, cache_name, *)
Cache that uses Momento as a backend.
cache.OpenSearchSemanticCache(...[, ...])
Cache that uses an OpenSearch vector store as a backend.
cache.RedisCache(redis_, *[, ttl])
Cache that uses Redis as a backend.
cache.RedisSemanticCache(redis_url, embedding)
Cache that uses Redis as a vector-store backend.
cache.SQLAlchemyCache(engine, cache_schema)
Cache that uses SQLAlchemy as a backend.
cache.SQLAlchemyMd5Cache(engine, cache_schema)
Cache that uses SQLAlchemy as a backend.
cache.SQLiteCache([database_path])
Cache that uses SQLite as a backend.
cache.SingleStoreDBSemanticCache(embedding, *)
Cache that uses SingleStore DB as a backend.
cache.UpstashRedisCache(redis_, *[, ttl])
Cache that uses Upstash Redis as a backend.
Deprecated classes¶
cache.AstraDBCache(*[, collection_name, ...])
Deprecated since version 0.0.28: Use langchain_astradb.AstraDBCache instead.
cache.AstraDBSemanticCache(*[, ...])
Deprecated since version 0.0.28: Use langchain_astradb.AstraDBSemanticCache instead.
langchain_community.callbacks¶
Callback handlers allow listening to events in LangChain. | https://api.python.langchain.com/en/latest/community_api_reference.html |
Class hierarchy:
BaseCallbackHandler --> <name>CallbackHandler # Example: AimCallbackHandler
Classes¶
callbacks.aim_callback.AimCallbackHandler([...])
Callback Handler that logs to Aim.
callbacks.aim_callback.BaseMetadataCallbackHandler()
Callback handler for the metadata and associated function states for callbacks.
callbacks.argilla_callback.ArgillaCallbackHandler(...)
Callback Handler that logs into Argilla.
callbacks.arize_callback.ArizeCallbackHandler([...])
Callback Handler that logs to Arize.
callbacks.arthur_callback.ArthurCallbackHandler(...)
Callback Handler that logs to Arthur platform.
callbacks.bedrock_anthropic_callback.BedrockAnthropicTokenUsageCallbackHandler()
Callback Handler that tracks bedrock anthropic info.
callbacks.clearml_callback.ClearMLCallbackHandler([...])
Callback Handler that logs to ClearML.
callbacks.comet_ml_callback.CometCallbackHandler([...])
Callback Handler that logs to Comet.
callbacks.confident_callback.DeepEvalCallbackHandler(metrics)
Callback Handler that logs into deepeval.
callbacks.context_callback.ContextCallbackHandler([...])
Callback Handler that records transcripts to the Context service.
callbacks.fiddler_callback.FiddlerCallbackHandler(...)
Initialize Fiddler callback handler.
callbacks.flyte_callback.FlyteCallbackHandler()
Callback handler that is used within a Flyte task.
callbacks.human.AsyncHumanApprovalCallbackHandler(...)
Asynchronous callback for manually validating values.
callbacks.human.HumanApprovalCallbackHandler(...)
Callback for manually validating values.
callbacks.human.HumanRejectedException
Exception to raise when a person manually reviews and rejects a value.
callbacks.infino_callback.InfinoCallbackHandler([...])
Callback Handler that logs to Infino.
callbacks.labelstudio_callback.LabelStudioCallbackHandler([...])
Label Studio callback handler.
callbacks.labelstudio_callback.LabelStudioMode(value) | https://api.python.langchain.com/en/latest/community_api_reference.html |
Label Studio mode enumerator.
callbacks.llmonitor_callback.LLMonitorCallbackHandler([...])
Callback Handler for LLMonitor.
callbacks.llmonitor_callback.UserContextManager(user_id)
Context manager for LLMonitor user context.
callbacks.mlflow_callback.MlflowCallbackHandler([...])
Callback Handler that logs metrics and artifacts to mlflow server.
callbacks.mlflow_callback.MlflowLogger(**kwargs)
Callback Handler that logs metrics and artifacts to mlflow server.
callbacks.openai_info.OpenAICallbackHandler()
Callback Handler that tracks OpenAI info.
callbacks.promptlayer_callback.PromptLayerCallbackHandler([...])
Callback handler for promptlayer.
callbacks.sagemaker_callback.SageMakerCallbackHandler(run)
Callback Handler that logs prompt artifacts and metrics to SageMaker Experiments.
callbacks.streamlit.mutable_expander.ChildRecord(...)
Child record as a NamedTuple.
callbacks.streamlit.mutable_expander.ChildType(value)
Enumerator of the child type.
callbacks.streamlit.mutable_expander.MutableExpander(...)
Streamlit expander that can be renamed and dynamically expanded/collapsed.
callbacks.streamlit.streamlit_callback_handler.LLMThought(...)
A thought in the LLM's thought stream.
callbacks.streamlit.streamlit_callback_handler.LLMThoughtLabeler()
Generates markdown labels for LLMThought containers.
callbacks.streamlit.streamlit_callback_handler.LLMThoughtState(value)
Enumerator of the LLMThought state.
callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler(...)
Callback handler that writes to a Streamlit app.
callbacks.streamlit.streamlit_callback_handler.ToolRecord(...)
Tool record as a NamedTuple.
callbacks.tracers.comet.CometTracer(**kwargs)
Comet Tracer.
callbacks.tracers.wandb.WandbRunArgs
Arguments for the WandbTracer. | https://api.python.langchain.com/en/latest/community_api_reference.html |
callbacks.tracers.wandb.WandbTracer(...)
Callback Handler that logs to Weights and Biases.
callbacks.trubrics_callback.TrubricsCallbackHandler([...])
Callback handler for Trubrics.
callbacks.upstash_ratelimit_callback.UpstashRatelimitError(...)
Upstash Ratelimit Error.
callbacks.upstash_ratelimit_callback.UpstashRatelimitHandler(...)
Callback to handle rate limiting based on the number of requests or the number of tokens in the input.
callbacks.uptrain_callback.UpTrainCallbackHandler(*)
Callback Handler that logs evaluation results to uptrain and the console.
callbacks.uptrain_callback.UpTrainDataSchema(...)
The UpTrain data schema for tracking evaluation results.
callbacks.utils.BaseMetadataCallbackHandler()
Handle the metadata and associated function states for callbacks.
callbacks.wandb_callback.WandbCallbackHandler([...])
Callback Handler that logs to Weights and Biases.
callbacks.whylabs_callback.WhyLabsCallbackHandler(...)
Callback Handler for logging to WhyLabs.
Functions¶
callbacks.aim_callback.import_aim()
Import the aim python package and raise an error if it is not installed.
callbacks.clearml_callback.import_clearml()
Import the clearml python package and raise an error if it is not installed.
callbacks.comet_ml_callback.import_comet_ml()
Import comet_ml and raise an error if it is not installed.
callbacks.context_callback.import_context()
Import the getcontext package.
callbacks.fiddler_callback.import_fiddler()
Import the fiddler python package and raise an error if it is not installed.
callbacks.flyte_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.flyte_callback.import_flytekit()
Import flytekit and flytekitplugins-deck-standard. | https://api.python.langchain.com/en/latest/community_api_reference.html |
callbacks.infino_callback.get_num_tokens(...)
Calculate num tokens for OpenAI with tiktoken package.
callbacks.infino_callback.import_infino()
Import the infino client.
callbacks.infino_callback.import_tiktoken()
Import tiktoken for counting tokens for OpenAI models.
callbacks.labelstudio_callback.get_default_label_configs(mode)
Get default Label Studio configs for the given mode.
callbacks.llmonitor_callback.identify(user_id)
Build an LLMonitor UserContextManager.
callbacks.manager.get_bedrock_anthropic_callback()
Get the Bedrock anthropic callback handler in a context manager.
callbacks.manager.get_openai_callback()
Get the OpenAI callback handler in a context manager.
callbacks.manager.wandb_tracing_enabled([...])
Get the WandbTracer in a context manager.
callbacks.mlflow_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.mlflow_callback.construct_html_from_prompt_and_generation(...)
Construct an html element from a prompt and a generation.
callbacks.mlflow_callback.get_text_complexity_metrics()
Get the text complexity metrics from textstat.
callbacks.mlflow_callback.import_mlflow()
Import the mlflow python package and raise an error if it is not installed.
callbacks.mlflow_callback.mlflow_callback_metrics()
Get the metrics to log to MLFlow.
callbacks.openai_info.get_openai_token_cost_for_model(...)
Get the cost in USD for a given model and number of tokens.
callbacks.openai_info.standardize_model_name(...)
Standardize the model name to a format that can be used in the OpenAI API.
callbacks.sagemaker_callback.save_json(data, ...)
Save dict to local file path.
callbacks.tracers.comet.import_comet_llm_api() | https://api.python.langchain.com/en/latest/community_api_reference.html |
Import comet_llm api and raise an error if it is not installed.
callbacks.tracers.wandb.build_tree(runs)
Build a nested dictionary from a list of runs, representing the LangChain runs as a tree structure compatible with WBTraceTree.
callbacks.tracers.wandb.flatten_run(run)
Utility to flatten a nest run object into a list of runs.
callbacks.tracers.wandb.modify_serialized_iterative(runs)
Utility to modify the serialized field of a list of run dictionaries: removes keys matching exact_keys or containing any of partial_keys, recursively moves the dictionaries under the kwargs key to the top level, changes the "id" field to a string "_kind" field that tells WBTraceTree how to visualize the run, and promotes the "serialized" field to the top level.
callbacks.tracers.wandb.truncate_run_iterative(runs)
Utility to truncate a list of runs dictionaries to only keep the specified keys in each run.
callbacks.uptrain_callback.import_uptrain()
Import the uptrain package.
callbacks.utils.flatten_dict(nested_dict[, ...])
Flatten a nested dictionary into a flat dictionary.
callbacks.utils.hash_string(s)
Hash a string using sha1.
callbacks.utils.import_pandas()
Import the pandas python package and raise an error if it is not installed.
callbacks.utils.import_spacy()
Import the spacy python package and raise an error if it is not installed.
callbacks.utils.import_textstat()
Import the textstat python package and raise an error if it is not installed.
callbacks.utils.load_json(json_path)
Load json file to a string.
callbacks.wandb_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.wandb_callback.construct_html_from_prompt_and_generation(...)
Construct an html element from a prompt and a generation.
callbacks.wandb_callback.import_wandb()
Import the wandb python package and raise an error if it is not installed.
callbacks.wandb_callback.load_json_to_dict(...)
Load json file to a dictionary.
callbacks.whylabs_callback.import_langkit([...])
Import the langkit python package and raise an error if it is not installed.
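Several of the callback utilities listed above are small, self-contained helpers. As a rough illustration of what flatten_dict and hash_string do (a sketch of the documented behavior, not the library source; the key separator is an assumption):

```python
import hashlib


def flatten_dict(nested, parent_key="", sep="_"):
    """Flatten a nested dictionary into a flat dictionary (sketch of
    callbacks.utils.flatten_dict; the separator choice is an assumption)."""
    items = {}
    for key, value in nested.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse into nested dicts, prefixing keys with the parent key.
            items.update(flatten_dict(value, new_key, sep))
        else:
            items[new_key] = value
    return items


def hash_string(s: str) -> str:
    """Hash a string using sha1, as callbacks.utils.hash_string is documented to."""
    return hashlib.sha1(s.encode("utf-8")).hexdigest()


print(flatten_dict({"model": {"name": "gpt", "temperature": 0}}))
# {'model_name': 'gpt', 'model_temperature': 0}
```

Flattened keys like these are convenient for logging nested run parameters to metric trackers that only accept flat key/value pairs.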
langchain_community.chains¶
Chains module for langchain_community
This module contains the community chains.
Classes¶
chains.graph_qa.arangodb.ArangoGraphQAChain
Chain for question-answering against a graph by generating AQL statements.
chains.graph_qa.base.GraphQAChain
Chain for question-answering against a graph.
chains.graph_qa.cypher.GraphCypherQAChain
Chain for question-answering against a graph by generating Cypher statements.
chains.graph_qa.cypher_utils.CypherQueryCorrector(schemas)
Used to correct relationship direction in generated Cypher statements.
chains.graph_qa.cypher_utils.Schema(...)
Create new instance of Schema(left_node, relation, right_node)
chains.graph_qa.falkordb.FalkorDBQAChain
Chain for question-answering against a graph by generating Cypher statements.
chains.graph_qa.gremlin.GremlinQAChain
Chain for question-answering against a graph by generating gremlin statements.
chains.graph_qa.hugegraph.HugeGraphQAChain
Chain for question-answering against a graph by generating gremlin statements.
chains.graph_qa.kuzu.KuzuQAChain
Question-answering against a graph by generating Cypher statements for Kùzu.
chains.graph_qa.nebulagraph.NebulaGraphQAChain
Chain for question-answering against a graph by generating nGQL statements.
chains.graph_qa.neptune_cypher.NeptuneOpenCypherQAChain
Chain for question-answering against a Neptune graph by generating openCypher statements.
chains.graph_qa.neptune_sparql.NeptuneSparqlQAChain
Chain for question-answering against a Neptune graph by generating SPARQL statements.
chains.graph_qa.ontotext_graphdb.OntotextGraphDBQAChain
Question-answering against Ontotext GraphDB
chains.graph_qa.sparql.GraphSparqlQAChain
Question-answering against an RDF or OWL graph by generating SPARQL statements.
chains.llm_requests.LLMRequestsChain
Chain that requests a URL and then uses an LLM to parse results.
chains.openapi.chain.OpenAPIEndpointChain
Chain interacts with an OpenAPI endpoint using natural language.
chains.openapi.requests_chain.APIRequesterChain
Get the request parser.
chains.openapi.requests_chain.APIRequesterOutputParser
Parse the request and error tags.
chains.openapi.response_chain.APIResponderChain
Get the response parser.
chains.openapi.response_chain.APIResponderOutputParser
Parse the response and error tags.
chains.pebblo_retrieval.base.PebbloRetrievalQA
Retrieval Chain with Identity & Semantic Enforcement for question-answering against a vector database.
chains.pebblo_retrieval.models.App
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.AuthContext
Class for an authorization context.
chains.pebblo_retrieval.models.ChainInfo
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.ChainInput
Input for PebbloRetrievalQA chain.
chains.pebblo_retrieval.models.Context
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.Framework
Langchain framework details
chains.pebblo_retrieval.models.Model
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.PkgInfo
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.Prompt
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.Qa
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.Runtime
OS, language details
chains.pebblo_retrieval.models.SemanticContext
Class for a semantic context.
chains.pebblo_retrieval.models.SemanticEntities
Class for a semantic entity filter.
chains.pebblo_retrieval.models.SemanticTopics
Class for a semantic topic filter.
chains.pebblo_retrieval.models.VectorDB
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.utilities.PebbloRetrievalAPIWrapper
Wrapper for Pebblo Retrieval API.
chains.pebblo_retrieval.utilities.Routes(value)
Routes available for the Pebblo API as enumerator.
Functions¶
chains.ernie_functions.base.convert_python_function_to_ernie_function(...)
Convert a Python function to an Ernie function-calling API compatible dict.
chains.ernie_functions.base.convert_to_ernie_function(...)
Convert a raw function/class to an Ernie function.
chains.ernie_functions.base.create_ernie_fn_chain(...)
[Legacy] Create an LLM chain that uses Ernie functions.
chains.ernie_functions.base.create_ernie_fn_runnable(...)
Create a runnable sequence that uses Ernie functions.
chains.ernie_functions.base.create_structured_output_chain(...)
[Legacy] Create an LLMChain that uses an Ernie function to get a structured output.
chains.ernie_functions.base.create_structured_output_runnable(...)
Create a runnable that uses an Ernie function to get a structured output.
chains.ernie_functions.base.get_ernie_output_parser(...)
Get the appropriate function output parser given the user functions.
chains.graph_qa.cypher.construct_schema(...)
Filter the schema based on included or excluded types
chains.graph_qa.cypher.extract_cypher(text)
Extract Cypher code from a text.
chains.graph_qa.cypher.get_function_response(...)
chains.graph_qa.falkordb.extract_cypher(text)
Extract Cypher code from a text.
chains.graph_qa.gremlin.extract_gremlin(text)
Extract Gremlin code from a text.
chains.graph_qa.kuzu.extract_cypher(text)
Extract Cypher code from a text.
chains.graph_qa.kuzu.remove_prefix(text, prefix)
Remove a prefix from a text.
chains.graph_qa.neptune_cypher.extract_cypher(text)
Extract Cypher code from text using Regex.
chains.graph_qa.neptune_cypher.trim_query(query)
Trim the query to only include Cypher keywords.
chains.graph_qa.neptune_cypher.use_simple_prompt(llm)
Decides whether to use the simple prompt
chains.graph_qa.neptune_sparql.extract_sparql(query)
Extract SPARQL code from a text.
chains.pebblo_retrieval.enforcement_filters.clear_enforcement_filters(...)
Clear the identity and semantic enforcement filters in the retriever search_kwargs.
chains.pebblo_retrieval.enforcement_filters.set_enforcement_filters(...)
Set identity and semantic enforcement filters in the retriever.
chains.pebblo_retrieval.utilities.get_ip()
Fetch local runtime ip address.
chains.pebblo_retrieval.utilities.get_runtime()
Fetch the current Framework and Runtime details.
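The various extract_cypher / extract_gremlin / extract_sparql helpers above all pull generated query code out of an LLM response. A minimal sketch of that pattern (an illustration, not the library implementation; the exact regex and fallback behavior may differ):

```python
import re


def extract_cypher(text: str) -> str:
    """Extract code wrapped in triple backticks from an LLM response,
    falling back to the raw text when no fenced block is found
    (a sketch of the documented behavior, not the library source)."""
    pattern = r"```(?:cypher)?\s*(.*?)```"
    matches = re.findall(pattern, text, re.DOTALL)
    return matches[0].strip() if matches else text


response = "Here is the query:\n```cypher\nMATCH (n:Person) RETURN n.name\n```"
print(extract_cypher(response))  # MATCH (n:Person) RETURN n.name
```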
langchain_community.chat_loaders¶
Chat Loaders load chat messages from common communications platforms.
Load chat messages from various
communications platforms such as Facebook Messenger, Telegram, and
WhatsApp. The loaded chat messages can be used for fine-tuning models.
Class hierarchy:
BaseChatLoader --> <name>ChatLoader # Examples: WhatsAppChatLoader, IMessageChatLoader
Main helpers:
ChatSession
Classes¶
chat_loaders.facebook_messenger.FolderFacebookMessengerChatLoader(path)
Load Facebook Messenger chat data from a folder.
chat_loaders.facebook_messenger.SingleFileFacebookMessengerChatLoader(path)
Load Facebook Messenger chat data from a single file.
chat_loaders.imessage.IMessageChatLoader([path])
Load chat sessions from the iMessage chat.db SQLite file.
chat_loaders.langsmith.LangSmithDatasetChatLoader(*, ...)
Load chat sessions from a LangSmith dataset with the "chat" data type.
chat_loaders.langsmith.LangSmithRunChatLoader(runs)
Load chat sessions from a list of LangSmith "llm" runs.
chat_loaders.slack.SlackChatLoader(path)
Load Slack conversations from a dump zip file.
chat_loaders.telegram.TelegramChatLoader(path)
Load telegram conversations to LangChain chat messages.
chat_loaders.whatsapp.WhatsAppChatLoader(path)
Load WhatsApp conversations from a dump zip file or directory.
Functions¶
chat_loaders.imessage.nanoseconds_from_2001_to_datetime(...)
Convert nanoseconds since 2001 to a datetime object.
chat_loaders.utils.map_ai_messages(...)
Convert messages from the specified 'sender' to AI messages.
chat_loaders.utils.map_ai_messages_in_session(...)
Convert messages from the specified 'sender' to AI messages.
chat_loaders.utils.merge_chat_runs(chat_sessions)
Merge chat runs together.
chat_loaders.utils.merge_chat_runs_in_session(...)
Merge chat runs together in a chat session.
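merge_chat_runs combines consecutive messages from the same sender into one message, which produces cleaner fine-tuning examples. A toy sketch of the idea over plain dicts (the real functions operate on ChatSession objects, not dicts):

```python
from itertools import groupby


def merge_chat_runs(messages):
    """Merge consecutive messages from the same sender into one message
    (sketch of chat_loaders.utils.merge_chat_runs; the library version
    works on ChatSession objects rather than plain dicts)."""
    merged = []
    # groupby groups only *consecutive* items with the same key,
    # which is exactly the "run" semantics we want here.
    for sender, run in groupby(messages, key=lambda m: m["sender"]):
        texts = [m["text"] for m in run]
        merged.append({"sender": sender, "text": "\n".join(texts)})
    return merged


chat = [
    {"sender": "alice", "text": "hi"},
    {"sender": "alice", "text": "are you there?"},
    {"sender": "bob", "text": "yes"},
]
print(merge_chat_runs(chat))
```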
Deprecated classes¶
chat_loaders.gmail.GMailLoader(creds[, n, ...])
Deprecated since version 0.0.32: Use langchain_google_community.GMailLoader instead.
langchain_community.chat_message_histories¶
Chat message history stores a history of the message interactions in a chat.
Class hierarchy:
BaseChatMessageHistory --> <name>ChatMessageHistory # Examples: FileChatMessageHistory, PostgresChatMessageHistory
Main helpers:
AIMessage, HumanMessage, BaseMessage
Classes¶
chat_message_histories.cassandra.CassandraChatMessageHistory(...)
Chat message history that is backed by Cassandra.
chat_message_histories.cosmos_db.CosmosDBChatMessageHistory(...)
Chat message history backed by Azure CosmosDB.
chat_message_histories.dynamodb.DynamoDBChatMessageHistory(...)
Chat message history that stores history in AWS DynamoDB.
chat_message_histories.file.FileChatMessageHistory(...)
Chat message history that stores history in a local file.
chat_message_histories.firestore.FirestoreChatMessageHistory(...)
Chat message history backed by Google Firestore.
chat_message_histories.kafka.ConsumeStartPosition(value)
Consume start position for Kafka consumer to get chat history messages.
chat_message_histories.kafka.KafkaChatMessageHistory(...)
Chat message history stored in Kafka.
chat_message_histories.momento.MomentoChatMessageHistory(...)
Chat message history cache that uses Momento as a backend.
chat_message_histories.neo4j.Neo4jChatMessageHistory(...)
Chat message history stored in a Neo4j database.
chat_message_histories.redis.RedisChatMessageHistory(...)
Chat message history stored in a Redis database.
chat_message_histories.rocksetdb.RocksetChatMessageHistory(...)
Uses Rockset to store chat messages.
chat_message_histories.singlestoredb.SingleStoreDBChatMessageHistory(...)
Chat message history stored in a SingleStoreDB database.
chat_message_histories.sql.BaseMessageConverter()
Convert BaseMessage to the SQLAlchemy model.
chat_message_histories.sql.DefaultMessageConverter(...)
The default message converter for SQLChatMessageHistory.
chat_message_histories.sql.SQLChatMessageHistory(...)
Chat message history stored in an SQL database.
chat_message_histories.streamlit.StreamlitChatMessageHistory([key])
Chat message history that stores messages in Streamlit session state.
chat_message_histories.tidb.TiDBChatMessageHistory(...)
Represents a chat message history stored in a TiDB database.
chat_message_histories.upstash_redis.UpstashRedisChatMessageHistory(...)
Chat message history stored in an Upstash Redis database.
chat_message_histories.xata.XataChatMessageHistory(...)
Chat message history stored in a Xata database.
chat_message_histories.zep.SearchScope(value)
Scope for the document search.
chat_message_histories.zep.SearchType(value)
Enumerator of the types of search to perform.
chat_message_histories.zep.ZepChatMessageHistory(...)
Chat message history that uses Zep as a backend.
chat_message_histories.zep_cloud.ZepCloudChatMessageHistory(...)
Chat message history that uses Zep Cloud as a backend.
Functions¶
chat_message_histories.kafka.ensure_topic_exists(...)
Create topic if it doesn't exist, and return the number of partitions.
chat_message_histories.sql.create_message_model(...)
Create a message model for a given table name.
chat_message_histories.zep_cloud.condense_zep_memory_into_human_message(...)
Condense Zep memory into a human message.
chat_message_histories.zep_cloud.get_zep_message_role_type(role)
Get the Zep role type from the role string.
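All of these backends implement the same contract: append messages, read them back, clear them. A minimal file-backed sketch of that contract (an illustration only; the real FileChatMessageHistory stores BaseMessage objects, and the class and field names here are hypothetical):

```python
import json
from pathlib import Path


class FileChatMessageHistorySketch:
    """Sketch of the FileChatMessageHistory idea: persist chat messages
    as JSON in a local file. Not the library class."""

    def __init__(self, path: str):
        self.path = Path(path)
        if not self.path.exists():
            self.path.write_text(json.dumps([]))

    @property
    def messages(self):
        """Read all stored messages back from disk."""
        return json.loads(self.path.read_text())

    def add_message(self, role: str, content: str) -> None:
        """Append one message and rewrite the file."""
        msgs = self.messages
        msgs.append({"role": role, "content": content})
        self.path.write_text(json.dumps(msgs))

    def clear(self) -> None:
        """Drop all stored messages."""
        self.path.write_text(json.dumps([]))
```

Swapping the file for Redis, DynamoDB, or Postgres changes only where the list of messages lives, which is why the classes above share one base interface.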
Deprecated classes¶
chat_message_histories.astradb.AstraDBChatMessageHistory(*, ...)
Deprecated since version 0.0.25: Use langchain_astradb.AstraDBChatMessageHistory instead.
chat_message_histories.elasticsearch.ElasticsearchChatMessageHistory(...)
Deprecated since version 0.0.27: Use the langchain-elasticsearch package instead.
chat_message_histories.mongodb.MongoDBChatMessageHistory(...)
Deprecated since version 0.0.25: Use langchain_mongodb.MongoDBChatMessageHistory instead.
chat_message_histories.postgres.PostgresChatMessageHistory(...)
Deprecated since version 0.0.31: This class is deprecated and will be removed in a future version. Swap to the PostgresChatMessageHistory implementation in langchain_postgres (see https://github.com/langchain-ai/langchain-postgres) and use from langchain_postgres import PostgresChatMessageHistory instead. Please do not submit further PRs to this class.
langchain_community.chat_models¶
Chat Models are a variation on language models.
While Chat Models use language models under the hood, the interface they expose
is a bit different. Rather than expose a “text in, text out” API, they expose
an interface where “chat messages” are the inputs and outputs.
Class hierarchy:
BaseLanguageModel --> BaseChatModel --> <name> # Examples: ChatOpenAI, ChatGooglePalm
Main helpers:
AIMessage, BaseMessage, HumanMessage
Classes¶
chat_models.anyscale.ChatAnyscale
Anyscale Chat large language models.
chat_models.azureml_endpoint.AzureMLChatOnlineEndpoint
Azure ML Online Endpoint chat models.
chat_models.azureml_endpoint.CustomOpenAIChatContentFormatter()
Chat Content formatter for models with OpenAI like API scheme.
chat_models.azureml_endpoint.LlamaChatContentFormatter()
Deprecated: Kept for backwards compatibility
chat_models.azureml_endpoint.LlamaContentFormatter()
Content formatter for LLaMA.
chat_models.azureml_endpoint.MistralChatContentFormatter()
Content formatter for Mistral.
chat_models.baichuan.ChatBaichuan
Baichuan chat model integration.
chat_models.baidu_qianfan_endpoint.QianfanChatEndpoint
Baidu Qianfan chat model integration.
chat_models.bedrock.ChatPromptAdapter()
Adapter class to prepare inputs from LangChain into the prompt format that the Chat model expects.
chat_models.coze.ChatCoze
ChatCoze chat models API by coze.com
chat_models.dappier.ChatDappierAI
Dappier chat large language models.
chat_models.databricks.ChatDatabricks
Databricks chat models API.
chat_models.deepinfra.ChatDeepInfra
A chat model that uses the DeepInfra API.
chat_models.deepinfra.ChatDeepInfraException
Exception raised when the DeepInfra API returns an error.
chat_models.edenai.ChatEdenAI
EdenAI chat large language models.
chat_models.everlyai.ChatEverlyAI
EverlyAI Chat large language models.
chat_models.fake.FakeListChatModel
Fake ChatModel for testing purposes.
chat_models.fake.FakeMessagesListChatModel
Fake ChatModel for testing purposes.
chat_models.friendli.ChatFriendli
Friendli LLM for chat.
chat_models.gigachat.GigaChat
GigaChat large language models API.
chat_models.google_palm.ChatGooglePalm
Google PaLM Chat models API.
chat_models.google_palm.ChatGooglePalmError
Error with the Google PaLM API.
chat_models.gpt_router.GPTRouter
GPTRouter by Writesonic Inc.
chat_models.gpt_router.GPTRouterException
Error with the GPTRouter APIs
chat_models.gpt_router.GPTRouterModel
GPTRouter model.
chat_models.human.HumanInputChatModel
ChatModel which returns user input as the response.
chat_models.hunyuan.ChatHunyuan
Tencent Hunyuan chat models API by Tencent.
chat_models.javelin_ai_gateway.ChatJavelinAIGateway
Javelin AI Gateway chat models API.
chat_models.javelin_ai_gateway.ChatParams
Parameters for the Javelin AI Gateway LLM.
chat_models.jinachat.JinaChat
Jina AI Chat models API.
chat_models.kinetica.ChatKinetica
Kinetica LLM Chat Model API.
chat_models.kinetica.KineticaSqlOutputParser
Fetch and return data from the Kinetica LLM.
chat_models.kinetica.KineticaSqlResponse
Response containing SQL and the fetched data.
chat_models.kinetica.KineticaUtil()
Kinetica utility functions.
chat_models.konko.ChatKonko
ChatKonko Chat large language models API.
chat_models.litellm.ChatLiteLLM
Chat model that uses the LiteLLM API.
chat_models.litellm.ChatLiteLLMException
Error with the LiteLLM I/O library
chat_models.litellm_router.ChatLiteLLMRouter
LiteLLM Router as LangChain Model.
chat_models.llama_edge.LlamaEdgeChatService
Chat with LLMs via llama-api-server
chat_models.llamacpp.ChatLlamaCpp
llama.cpp model.
chat_models.maritalk.ChatMaritalk
MariTalk Chat models API.
chat_models.maritalk.MaritalkHTTPError(...)
Initialize RequestException with request and response objects.
chat_models.minimax.MiniMaxChat
MiniMax chat model integration.
chat_models.mlflow.ChatMlflow
MLflow chat models API.
chat_models.mlflow_ai_gateway.ChatMLflowAIGateway
MLflow AI Gateway chat models API.
chat_models.mlflow_ai_gateway.ChatParams
Parameters for the MLflow AI Gateway LLM.
chat_models.mlx.ChatMLX
MLX chat models.
chat_models.moonshot.MoonshotChat
Moonshot large language models.
chat_models.oci_generative_ai.ChatOCIGenAI
ChatOCIGenAI chat model integration.
chat_models.oci_generative_ai.CohereProvider()
chat_models.oci_generative_ai.MetaProvider()
chat_models.oci_generative_ai.Provider()
chat_models.octoai.ChatOctoAI
OctoAI Chat large language models.
chat_models.ollama.ChatOllama
Ollama locally runs large language models.
chat_models.pai_eas_endpoint.PaiEasChatEndpoint
Alibaba Cloud PAI-EAS LLM Service chat model API.
chat_models.perplexity.ChatPerplexity
Perplexity AI Chat models API.
chat_models.premai.ChatPremAI
PremAI Chat models.
chat_models.premai.ChatPremAPIError
Error with the PremAI API.
chat_models.promptlayer_openai.PromptLayerChatOpenAI
PromptLayer and OpenAI Chat large language models API.
chat_models.snowflake.ChatSnowflakeCortex
Snowflake Cortex based Chat model
chat_models.snowflake.ChatSnowflakeCortexError
Error with Snowpark client.
chat_models.sparkllm.ChatSparkLLM
IFlyTek Spark chat model integration.
chat_models.symblai_nebula.ChatNebula
Nebula chat large language model - https://docs.symbl.ai/docs/nebula-llm
chat_models.tongyi.ChatTongyi
Alibaba Tongyi Qwen chat model integration.
chat_models.volcengine_maas.VolcEngineMaasChat
Volc Engine Maas hosts a plethora of models.
chat_models.yandex.ChatYandexGPT
YandexGPT large language models.
chat_models.yi.ChatYi
Yi chat models API.
chat_models.yuan2.ChatYuan2
Yuan2.0 Chat models API.
chat_models.zhipuai.ChatZhipuAI
ZhipuAI chat model integration.
Functions¶
chat_models.anthropic.convert_messages_to_prompt_anthropic(...)
Format a list of messages into a full prompt for the Anthropic model
chat_models.baichuan.aconnect_httpx_sse(...)
Async context manager for connecting to an SSE stream.
chat_models.baidu_qianfan_endpoint.convert_message_to_dict(message)
Convert a message to a dictionary that can be passed to the API.
chat_models.bedrock.convert_messages_to_prompt_mistral(...)
Convert a list of messages to a prompt for mistral.
chat_models.cohere.get_cohere_chat_request(...)
Get the request for the Cohere chat API.
chat_models.cohere.get_role(message)
Get the role of the message.
chat_models.fireworks.acompletion_with_retry(...)
Use tenacity to retry the async completion call.
chat_models.fireworks.acompletion_with_retry_streaming(...)
Use tenacity to retry the completion call for streaming.
chat_models.fireworks.completion_with_retry(...)
Use tenacity to retry the completion call.
chat_models.fireworks.conditional_decorator(...)
Define conditional decorator.
chat_models.fireworks.convert_dict_to_message(_dict)
Convert a dict response to a message.
chat_models.friendli.get_chat_request(messages)
Get a request of the Friendli chat API.
chat_models.friendli.get_role(message)
Get role of the message.
chat_models.google_palm.achat_with_retry(...)
Use tenacity to retry the async completion call.
chat_models.google_palm.chat_with_retry(llm, ...)
Use tenacity to retry the completion call.
chat_models.gpt_router.acompletion_with_retry(...)
Use tenacity to retry the async completion call.
chat_models.gpt_router.completion_with_retry(...)
Use tenacity to retry the completion call.
chat_models.gpt_router.get_ordered_generation_requests(...)
Return the body for the model router input.
chat_models.jinachat.acompletion_with_retry(...)
Use tenacity to retry the async completion call.
chat_models.litellm.acompletion_with_retry(llm)
Use tenacity to retry the async completion call.
chat_models.litellm_router.get_llm_output(...)
Get llm output from usage and params.
chat_models.meta.convert_messages_to_prompt_llama(...)
Convert a list of messages to a prompt for llama.
chat_models.minimax.aconnect_httpx_sse(...)
Async context manager for connecting to an SSE stream.
chat_models.minimax.connect_httpx_sse(...)
Context manager for connecting to an SSE stream.
chat_models.openai.acompletion_with_retry(llm)
Use tenacity to retry the async completion call.
chat_models.premai.chat_with_retry(llm, ...)
Use tenacity to retry the completion call.
chat_models.premai.create_prem_retry_decorator(llm, *)
Create a retry decorator for PremAI API errors.
chat_models.sparkllm.convert_dict_to_message(_dict)
chat_models.sparkllm.convert_message_to_dict(message)
chat_models.tongyi.convert_dict_to_message(_dict)
Convert a dict to a message.
chat_models.tongyi.convert_message_chunk_to_message(...)
Convert a message chunk to a message.
chat_models.tongyi.convert_message_to_dict(message)
Convert a message to a dict.
chat_models.volcengine_maas.convert_dict_to_message(_dict)
Convert a dict to a message.
chat_models.yandex.acompletion_with_retry(...)
Use tenacity to retry the async completion call.
chat_models.yandex.completion_with_retry(...)
Use tenacity to retry the completion call.
chat_models.yi.aconnect_httpx_sse(client, ...)
chat_models.yuan2.acompletion_with_retry(...)
Use tenacity to retry the async completion call.
chat_models.zhipuai.aconnect_sse(client, ...)
Async context manager for connecting to an SSE stream.
chat_models.zhipuai.connect_sse(client, ...)
Context manager for connecting to an SSE stream.
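Helpers such as convert_messages_to_prompt_anthropic and convert_messages_to_prompt_llama flatten a list of role-tagged chat messages back into a single prompt string for models that expose a text-completion API. A rough sketch of the Human/Assistant convention (an illustration; the exact whitespace, role names, and stop handling in the library helpers may differ):

```python
def messages_to_prompt(messages):
    """Flatten role-tagged messages into a Human/Assistant prompt string
    (a sketch of the convention, not the exact library formatting)."""
    role_tags = {"human": "Human", "ai": "Assistant", "system": "System"}
    parts = [f"{role_tags[m['role']]}: {m['content']}" for m in messages]
    # End with an open "Assistant:" turn so the model continues from there.
    return "\n\n".join(parts) + "\n\nAssistant:"


print(messages_to_prompt([{"role": "human", "content": "What is 2 + 2?"}]))
```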
Deprecated classes¶
chat_models.anthropic.ChatAnthropic
Deprecated since version 0.0.28: Use langchain_anthropic.ChatAnthropic instead.
chat_models.azure_openai.AzureChatOpenAI
Deprecated since version 0.0.10: Use langchain_openai.AzureChatOpenAI instead.
chat_models.bedrock.BedrockChat
Deprecated since version 0.0.34: Use langchain_aws.ChatBedrock instead.
chat_models.cohere.ChatCohere
Deprecated since version 0.0.30: Use langchain_cohere.ChatCohere instead.
chat_models.ernie.ErnieBotChat
Deprecated since version 0.0.13: Use langchain_community.chat_models.QianfanChatEndpoint instead.
chat_models.fireworks.ChatFireworks
Deprecated since version 0.0.26: Use langchain_fireworks.ChatFireworks instead.
chat_models.huggingface.ChatHuggingFace
Deprecated since version 0.0.37: Use langchain_huggingface.ChatHuggingFace instead.
chat_models.openai.ChatOpenAI
Deprecated since version 0.0.10: Use langchain_openai.ChatOpenAI instead.
chat_models.solar.SolarChat
Deprecated since version 0.0.34: Use langchain_upstage.ChatUpstage instead.
chat_models.vertexai.ChatVertexAI
Deprecated since version 0.0.12: Use langchain_google_vertexai.ChatVertexAI instead.
langchain_community.cross_encoders¶
Cross encoders are wrappers around cross encoder models from different APIs and services.
Cross encoder models can be LLMs or not.
Class hierarchy:
BaseCrossEncoder --> <name>CrossEncoder # Examples: SagemakerEndpointCrossEncoder
Classes¶
cross_encoders.fake.FakeCrossEncoder
Fake cross encoder model.
cross_encoders.huggingface.HuggingFaceCrossEncoder
HuggingFace cross encoder models.
cross_encoders.sagemaker_endpoint.CrossEncoderContentHandler()
Content handler for CrossEncoder class.
cross_encoders.sagemaker_endpoint.SagemakerEndpointCrossEncoder
SageMaker Inference CrossEncoder endpoint.
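A cross encoder scores each (query, document) pair jointly rather than embedding query and document separately, which is what makes it useful for reranking. A toy stand-in for the BaseCrossEncoder interface (the scoring function here is simple token overlap, purely for illustration; real implementations call a model):

```python
from typing import List, Tuple


class ToyCrossEncoder:
    """Toy stand-in for the BaseCrossEncoder interface: score each
    (query, document) pair jointly. Not a real model."""

    def score(self, text_pairs: List[Tuple[str, str]]) -> List[float]:
        scores = []
        for query, doc in text_pairs:
            # Fraction of query tokens that also appear in the document.
            q, d = set(query.lower().split()), set(doc.lower().split())
            scores.append(len(q & d) / max(len(q), 1))
        return scores


enc = ToyCrossEncoder()
print(enc.score([("paris france", "paris is the capital of france")]))
```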
langchain_community.docstore¶
Docstores are classes to store and load Documents.
The Docstore is a simplified version of the Document Loader.
Class hierarchy:
Docstore --> <name> # Examples: InMemoryDocstore, Wikipedia
Main helpers:
Document, AddableMixin
Classes¶
docstore.arbitrary_fn.DocstoreFn(lookup_fn)
Docstore via arbitrary lookup function.
docstore.base.AddableMixin()
Mixin class that supports adding texts.
docstore.base.Docstore()
Interface to access the place that stores documents.
docstore.in_memory.InMemoryDocstore([_dict])
Simple in-memory docstore in the form of a dict.
docstore.wikipedia.Wikipedia()
Wikipedia API.
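As the entry above says, InMemoryDocstore is essentially a dict keyed by document id. A sketch of that shape (an illustration; the real class stores Document objects and mixes in AddableMixin, and its exact error and not-found behavior may differ):

```python
class InMemoryDocstoreSketch:
    """Sketch of the InMemoryDocstore idea: a dict from ids to documents,
    with add and search. Not the library class."""

    def __init__(self, docs=None):
        self._dict = dict(docs or {})

    def add(self, texts: dict) -> None:
        """Add new documents, refusing to overwrite existing ids."""
        overlap = set(texts) & set(self._dict)
        if overlap:
            raise ValueError(f"Tried to add ids that already exist: {overlap}")
        self._dict.update(texts)

    def search(self, doc_id: str):
        """Return the document for an id, or a not-found message."""
        return self._dict.get(doc_id, f"ID {doc_id} not found.")
```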
langchain_community.document_compressors¶
Classes¶
document_compressors.dashscope_rerank.DashScopeRerank
Document compressor that uses DashScope Rerank API.
document_compressors.flashrank_rerank.FlashrankRerank
Document compressor using Flashrank interface.
document_compressors.jina_rerank.JinaRerank
Document compressor that uses Jina Rerank API.
document_compressors.llmlingua_filter.LLMLinguaCompressor
Compress using LLMLingua Project.
document_compressors.openvino_rerank.OpenVINOReranker
OpenVINO rerank models.
document_compressors.openvino_rerank.RerankRequest([...])
Request for reranking.
document_compressors.rankllm_rerank.ModelType(value)
An enumeration.
document_compressors.rankllm_rerank.RankLLMRerank
Document compressor using the RankLLM interface.
document_compressors.volcengine_rerank.VolcengineRerank
Document compressor that uses Volcengine Rerank API.
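All of the rerank compressors above follow one pattern: score every candidate document against the query, sort by score, and keep the top_n. A minimal sketch of that pattern with a pluggable scoring function (the overlap scorer is a stand-in; real compressors call a rerank API or a local model):

```python
def rerank(query, documents, score_fn, top_n=3):
    """Sketch of the document-compressor rerank pattern: score, sort
    descending, keep the top_n. Not any specific library class."""
    scored = [(score_fn(query, doc), doc) for doc in documents]
    # Python's sort is stable, so equal scores keep their input order.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_n]]


docs = ["cats purr", "dogs bark", "cats and dogs"]
overlap = lambda q, d: len(set(q.split()) & set(d.split()))
print(rerank("cats", docs, overlap, top_n=2))
```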
langchain_community.document_loaders¶
Document Loaders are classes to load Documents.
Document Loaders are usually used to load a lot of Documents in a single run.
Class hierarchy:
BaseLoader --> <name>Loader # Examples: TextLoader, UnstructuredFileLoader
Main helpers:
Document, <name>TextSplitter
Classes¶
document_loaders.acreom.AcreomLoader(path[, ...])
Load acreom vault from a directory.
document_loaders.airbyte.AirbyteCDKLoader(...)
Load with an Airbyte source connector implemented using the CDK.
document_loaders.airbyte.AirbyteGongLoader(...)
Load from Gong using an Airbyte source connector.
document_loaders.airbyte.AirbyteHubspotLoader(...)
Load from Hubspot using an Airbyte source connector.
document_loaders.airbyte.AirbyteSalesforceLoader(...)