| id | text | source |
|---|---|---|
b7a758866d60-14 | chains.openai_functions.extraction.create_extraction_chain(...)
Deprecated since version 0.1.14: LangChain has introduced a method called with_structured_output that is available on ChatModels capable of tool calling. You can read more about the method here: <https://python.langchain.com/docs/modules/model_io/chat/struct... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-15 | chains.openai_functions.qa_with_structure.create_qa_with_sources_chain(llm)
Deprecated since version 0.2.13: This function is deprecated. Refer to this guide on retrieval and question answering with sources: https://python.langchain.com/v0.2/docs/how_to/qa_sources/#structure-sources-in-model-response
chains.openai_func... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-16 | chains.openai_functions.tagging.create_tagging_chain_pydantic(...)
Deprecated since version 0.2.13: LangChain has introduced a method called with_structured_output that is available on ChatModels capable of tool calling. See API reference for this function for replacement: <https://api.python.langchain.com/en/latest/ch... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-17 | Deprecated since version 0.2.13: Use load_query_constructor_runnable instead.
chains.question_answering.chain.load_qa_chain(llm)
Deprecated since version 0.2.13: This class is deprecated. See the following migration guides for replacements based on chain_type:
chains.structured_output.base.create_openai_fn_runnable(...... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-18 | an interface where “chat messages” are the inputs and outputs.
Class hierarchy:
BaseLanguageModel --> BaseChatModel --> <name> # Examples: ChatOpenAI, ChatGooglePalm
Main helpers:
AIMessage, BaseMessage, HumanMessage
Functions¶
chat_models.base.init_chat_model()
langchain.embeddings¶
Embedding models are wrappers aro... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-19 | Grading the accuracy of a response against ground truth answers: QAEvalChain
Comparing the output of two models: PairwiseStringEvalChain or LabeledPairwiseStringEvalChain when there is additionally a reference label.
Judging the efficacy of an agent’s tool usage: TrajectoryEvalChain
Checking whether an output complies ... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-20 | evaluation.criteria.eval_chain.Criteria(value)
A Criteria to evaluate.
evaluation.criteria.eval_chain.CriteriaEvalChain
LLM Chain for evaluating runs against criteria.
evaluation.criteria.eval_chain.CriteriaResultOutputParser
A parser for the output of the CriteriaEvalChain.
evaluation.criteria.eval_chain.LabeledCriter... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-21 | The types of the evaluators.
evaluation.schema.LLMEvalChain
A base class for evaluators that use an LLM.
evaluation.schema.PairwiseStringEvaluator()
Compare the output of two models (or two outputs of the same model).
evaluation.schema.StringEvaluator()
Grade, tag, or otherwise evaluate predictions relative to their in... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-22 | globals.get_llm_cache()
Get the value of the llm_cache global setting.
globals.get_verbose()
Get the value of the verbose global setting.
globals.set_debug(value)
Set a new value for the debug global setting.
globals.set_llm_cache(value)
Set a new LLM cache, overwriting the previous value, if any.
globals.set_verbose(v... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
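The `langchain.globals` getters and setters listed above follow a simple module-level pattern: a private module variable plus paired accessor functions. A minimal pure-Python sketch of that pattern (not LangChain's actual implementation, which also handles legacy attribute fallbacks):

```python
from typing import Any, Optional

_verbose: bool = False
_llm_cache: Optional[Any] = None


def set_verbose(value: bool) -> None:
    """Set a new value for the verbose global setting."""
    global _verbose
    _verbose = value


def get_verbose() -> bool:
    """Get the value of the verbose global setting."""
    return _verbose


def set_llm_cache(value: Optional[Any]) -> None:
    """Set a new LLM cache, overwriting the previous value, if any."""
    global _llm_cache
    _llm_cache = value


def get_llm_cache() -> Optional[Any]:
    """Get the value of the llm_cache global setting."""
    return _llm_cache
```

The accessor indirection lets library code consult one shared setting without importing mutable globals directly.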
b7a758866d60-23 | Class hierarchy for ChatMessageHistory:
BaseChatMessageHistory --> <name>ChatMessageHistory # Example: ZepChatMessageHistory
Main helpers:
AIMessage, BaseMessage, HumanMessage
Classes¶
memory.buffer.ConversationBufferMemory
Buffer for storing conversation memory.
memory.buffer.ConversationStringBufferMemory
Buffer for... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-24 | Get the prompt input key.
Deprecated classes¶
memory.summary.SummarizerMixin
Deprecated since version 0.2.12: Refer here for how to incorporate summaries of conversation history: https://langchain-ai.github.io/langgraph/how-tos/memory/add-summary-conversation-history/
langchain.model_laboratory¶
Experiment with differe... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-25 | Wrap a parser and try to fix parsing errors.
output_parsers.retry.RetryWithErrorOutputParserRetryChainInput
output_parsers.structured.ResponseSchema
Schema for a response from a structured output parser.
output_parsers.structured.StructuredOutputParser
Parse the output of an LLM call to a structured output.
output_pars... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-26 | retrievers.document_compressors.cross_encoder.BaseCrossEncoder()
Interface for cross encoder models.
retrievers.document_compressors.cross_encoder_rerank.CrossEncoderReranker
Document compressor that uses CrossEncoder for reranking.
retrievers.document_compressors.embeddings_filter.EmbeddingsFilter
Document compressor ... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-27 | retrievers.document_compressors.chain_extract.default_get_input(...)
Return the compression chain input.
retrievers.document_compressors.chain_filter.default_get_input(...)
Return the compression chain input.
retrievers.ensemble.unique_by_key(iterable, key)
Yield unique elements of an iterable based on a key function.
... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
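`retrievers.ensemble.unique_by_key` deduplicates by a derived key while preserving order, which an ensemble retriever needs when merging overlapping result lists. A sketch consistent with the documented signature (the real helper lives in `langchain.retrievers.ensemble`):

```python
from typing import Callable, Iterable, Iterator, Set, TypeVar

T = TypeVar("T")
H = TypeVar("H")


def unique_by_key(iterable: Iterable[T], key: Callable[[T], H]) -> Iterator[T]:
    """Yield unique elements of an iterable based on a key function."""
    seen: Set[H] = set()
    for item in iterable:
        k = key(item)
        if k not in seen:  # keep only the first element seen for each key
            seen.add(k)
            yield item
```

A typical use is deduplicating retrieved documents by their page content while keeping the highest-ranked copy.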
b7a758866d60-28 | An example of this is shown below, assuming you’ve created a LangSmith dataset called <my_dataset_name>:
from langsmith import Client
from langchain_community.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.smith import RunEvalConfig, run_on_dataset
# Chains may have memory. Passing i... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
b7a758866d60-29 | def evaluation_name(self) -> str:
return "exact_match"
def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
return {"score": prediction == reference}
evaluation_config = RunEvalConfig(
custom_evaluators = [MyStringEvaluator()],
)
run_on_dataset(
client,
... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
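Pieced together, the truncated snippet above defines a custom evaluator with an `evaluation_name` property and an `_evaluate_strings` method. The real class would subclass `langchain.evaluation.StringEvaluator`, but the scoring logic can be shown standalone:

```python
class ExactMatchStringEvaluator:
    """Standalone sketch; the real evaluator subclasses StringEvaluator."""

    @property
    def evaluation_name(self) -> str:
        return "exact_match"

    def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
        # Score is True only when the prediction matches the reference exactly.
        return {"score": prediction == reference}
```

An instance of this class is what gets passed in `RunEvalConfig(custom_evaluators=[...])` before calling `run_on_dataset`.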
b7a758866d60-30 | smith.evaluation.string_run_evaluator.StringExampleMapper
Map an example, or row in the dataset, to the inputs of an evaluation.
smith.evaluation.string_run_evaluator.StringRunEvaluatorChain
Evaluate Run and optional examples.
smith.evaluation.string_run_evaluator.StringRunMapper
Extract items to evaluate from the run ... | https://api.python.langchain.com/en/latest/langchain_api_reference.html |
8f24cd89408b-0 | langchain_together 0.1.6¶
langchain_together.chat_models¶
Wrapper around Together AI’s Chat Completions API.
Classes¶
chat_models.ChatTogether
ChatTogether chat model.
langchain_together.embeddings¶
Wrapper around Together AI’s Embeddings API.
Classes¶
embeddings.TogetherEmbeddings
Together embedding model integration.... | https://api.python.langchain.com/en/latest/together_api_reference.html |
cf549542f773-0 | langchain_google_genai 1.0.10¶
langchain_google_genai.chat_models¶
Classes¶
chat_models.ChatGoogleGenerativeAI
Google AI chat models integration.
chat_models.ChatGoogleGenerativeAIError
Custom exception class for errors associated with the Google GenAI API.
langchain_google_genai.embeddings¶
Classes¶
embeddings.GoogleG... | https://api.python.langchain.com/en/latest/google_genai_api_reference.html |
a9bbe44ba76d-0 | langchain_exa 0.1.0¶
langchain_exa.retrievers¶
Classes¶
retrievers.ExaSearchRetriever
Exa Search retriever.
langchain_exa.tools¶
Tool for the Exa Search API.
Classes¶
tools.ExaFindSimilarResults
Tool that queries the Metaphor Search API and gets back JSON.
tools.ExaSearchResults
Exa Search tool. | https://api.python.langchain.com/en/latest/exa_api_reference.html |
f88ea9927621-0 | langchain_mongodb 0.1.9¶
langchain_mongodb.cache¶
LangChain MongoDB Caches.
Classes¶
cache.MongoDBAtlasSemanticCache(...[, ...])
MongoDB Atlas Semantic cache.
cache.MongoDBCache(connection_string[, ...])
MongoDB Atlas cache.
langchain_mongodb.chat_message_histories¶
Classes¶
chat_message_histories.MongoDBChatMessageHist... | https://api.python.langchain.com/en/latest/mongodb_api_reference.html |
f88ea9927621-1 | to create MongoDB’s core Vector Search Retriever.
Classes¶
retrievers.full_text_search.MongoDBAtlasFullTextSearchRetriever
Full-Text Search Retriever performs full-text searches using Lucene's standard (BM25) analyzer.
retrievers.hybrid_search.MongoDBAtlasHybridSearchRetriever
Hybrid Search Retriever combines vector and f... | https://api.python.langchain.com/en/latest/mongodb_api_reference.html |
37d7bd7c4136-0 | langchain_ai21 0.1.8¶
langchain_ai21.ai21_base¶
Classes¶
ai21_base.AI21Base
Base class for AI21 models.
langchain_ai21.chat¶
Classes¶
chat.chat_adapter.ChatAdapter()
Common interface for the different Chat models available in AI21.
chat.chat_adapter.J2ChatAdapter()
Adapter for J2Chat models.
chat.chat_adapter.JambaChat... | https://api.python.langchain.com/en/latest/ai21_api_reference.html |
8251bcb86eac-0 | langchain_pinecone 0.1.3¶
langchain_pinecone.embeddings¶
Classes¶
embeddings.PineconeEmbeddings
PineconeEmbeddings embedding model.
langchain_pinecone.vectorstores¶
Classes¶
vectorstores.PineconeVectorStore([index, ...])
Pinecone vector store integration.
Deprecated classes¶
vectorstores.Pinecone([index, embedding, ...... | https://api.python.langchain.com/en/latest/pinecone_api_reference.html |
ea3c1f0c8d8b-0 | langchain_voyageai 0.1.1¶
langchain_voyageai.embeddings¶
Classes¶
embeddings.VoyageAIEmbeddings
VoyageAIEmbeddings embedding model.
langchain_voyageai.rerank¶
Classes¶
rerank.VoyageAIRerank
Document compressor that uses VoyageAI Rerank API. | https://api.python.langchain.com/en/latest/voyageai_api_reference.html |
516871f008f3-0 | langchain_qdrant 0.1.3¶
langchain_qdrant.fastembed_sparse¶
Classes¶
fastembed_sparse.FastEmbedSparse([...])
An interface for sparse embedding models to use with Qdrant.
langchain_qdrant.qdrant¶
Classes¶
qdrant.QdrantVectorStore(client, collection_name)
Qdrant vector store integration.
qdrant.QdrantVectorStoreError
Qdra... | https://api.python.langchain.com/en/latest/qdrant_api_reference.html |
42a79037eacc-0 | langchain_azure_dynamic_sessions 0.1.0¶
langchain_azure_dynamic_sessions.tools¶
This package provides tools for managing dynamic sessions in Azure.
Classes¶
tools.sessions.RemoteFileMetadata(filename, ...)
Metadata for a file in the session.
tools.sessions.SessionsPythonREPLTool
Azure Dynamic Sessions tool. | https://api.python.langchain.com/en/latest/azure_dynamic_sessions_api_reference.html |
366d6205eece-0 | langchain_ollama 0.1.3¶
langchain_ollama.chat_models¶
Ollama chat models.
Classes¶
chat_models.ChatOllama
Ollama chat model integration.
langchain_ollama.embeddings¶
Classes¶
embeddings.OllamaEmbeddings
Ollama embedding model integration.
langchain_ollama.llms¶
Ollama large language models.
Classes¶
llms.OllamaLLM
Olla... | https://api.python.langchain.com/en/latest/ollama_api_reference.html |
c0ac85d4fc3f-0 | langchain_ibm 0.1.9¶ | https://api.python.langchain.com/en/latest/ibm_api_reference.html |
74a6bf334af8-0 | langchain_astradb 0.3.5¶
langchain_astradb.cache¶
Astra DB-based caches.
Classes¶
cache.AstraDBCache(*[, collection_name, ...])
Cache that uses Astra DB as a backend.
cache.AstraDBSemanticCache(*[, ...])
Astra DB semantic cache.
langchain_astradb.chat_message_histories¶
Astra DB-based chat message history, based on... | https://api.python.langchain.com/en/latest/astradb_api_reference.html |
74a6bf334af8-1 | langchain_astradb.vectorstores¶
Astra DB vector store integration.
Classes¶
vectorstores.AstraDBVectorStore(*, ...[, ...])
AstraDB vector store integration. | https://api.python.langchain.com/en/latest/astradb_api_reference.html |
1bf0c1cb282f-0 | langchain_postgres 0.0.10¶
langchain_postgres.chat_message_histories¶
Client for persisting chat message history in a Postgres database.
This client provides support for both sync and async via psycopg 3.
Classes¶
chat_message_histories.PostgresChatMessageHistory(...)
Client for persisting chat message history in a Pos... | https://api.python.langchain.com/en/latest/postgres_api_reference.html |
f9ca1d6aba47-0 | langchain_huggingface 0.0.3¶
langchain_huggingface.chat_models¶
Classes¶
chat_models.huggingface.ChatHuggingFace
Hugging Face LLMs as ChatModels.
chat_models.huggingface.TGI_MESSAGE(role, ...)
Message to send to the TextGenInference API.
chat_models.huggingface.TGI_RESPONSE(...)
Response from the TextGenInference API.... | https://api.python.langchain.com/en/latest/huggingface_api_reference.html |
9b07f0db9f33-0 | langchain_nomic 0.1.2¶ | https://api.python.langchain.com/en/latest/nomic_api_reference.html |
bf0f41b6994f-0 | langchain_aws 0.1.17¶
langchain_aws.agents¶
Classes¶
agents.base.BedrockAgentAction
AgentAction with session id information.
agents.base.BedrockAgentFinish
AgentFinish with session id information.
agents.base.BedrockAgentsRunnable
Invoke a Bedrock Agent
agents.base.GuardrailConfiguration
Functions¶
agents.base.get_boto... | https://api.python.langchain.com/en/latest/aws_api_reference.html |
bf0f41b6994f-1 | for supported model providers
Classes¶
function_calling.AnthropicTool
function_calling.FunctionDescription
Representation of a callable function to send to an LLM.
function_calling.ToolDescription
Representation of a callable function to the OpenAI API.
function_calling.ToolsOutputParser
Fields
Functions¶
function_call... | https://api.python.langchain.com/en/latest/aws_api_reference.html |
bf0f41b6994f-2 | llms.sagemaker_endpoint.enforce_stop_tokens(...)
Cut off the text as soon as any stop words occur.
Deprecated classes¶
llms.bedrock.Bedrock
Deprecated since version 0.1.0: Use BedrockLLM instead.
langchain_aws.retrievers¶
Classes¶
retrievers.bedrock.AmazonKnowledgeBasesRetriever
Amazon Bedrock Knowledge Bases retrieval... | https://api.python.langchain.com/en/latest/aws_api_reference.html |
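`llms.sagemaker_endpoint.enforce_stop_tokens` truncates generated text at the first occurrence of any stop sequence. A sketch of the documented behavior (assumed implementation; the real helper may differ in edge-case handling):

```python
import re
from typing import List


def enforce_stop_tokens(text: str, stop: List[str]) -> str:
    """Cut off the text as soon as any stop words occur."""
    # Split on any of the stop sequences and keep only the leading part.
    return re.split("|".join(re.escape(s) for s in stop), text)[0]
```

This is needed because some hosted endpoints do not apply stop sequences server-side, so the client must trim completions itself.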
bf0f41b6994f-3 | Classes¶
utilities.redis.TokenEscaper([escape_chars_re])
Escape punctuation within an input string.
utilities.utils.DistanceStrategy(value)
Enumerator of the Distance strategies for calculating distances between vectors.
Functions¶
utilities.math.cosine_similarity(X, Y)
Row-wise cosine similarity between two equal-widt... | https://api.python.langchain.com/en/latest/aws_api_reference.html |
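`utilities.math.cosine_similarity(X, Y)` computes row-wise similarity between two equal-width matrices. A pure-Python sketch of the semantics (the real implementation is vectorized with NumPy):

```python
import math
from typing import List, Sequence

Matrix = List[Sequence[float]]


def cosine_similarity(X: Matrix, Y: Matrix) -> List[List[float]]:
    """Row-wise cosine similarity: result[i][j] compares X[i] with Y[j]."""

    def cos(a: Sequence[float], b: Sequence[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    return [[cos(x, y) for y in Y] for x in X]
```

The output is an |X| x |Y| matrix, which is what embedding filters and rerankers consume when comparing a query vector against many document vectors.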
bf0f41b6994f-4 | vectorstores.inmemorydb.filters.InMemoryDBNum(field)
InMemoryDBFilterField representing a numeric field in a InMemoryDB index.
vectorstores.inmemorydb.filters.InMemoryDBTag(field)
InMemoryDBFilterField representing a tag in a InMemoryDB index.
vectorstores.inmemorydb.filters.InMemoryDBText(field)
InMemoryDBFilterField ... | https://api.python.langchain.com/en/latest/aws_api_reference.html |
cea55fa46db7-0 | langchain_google_vertexai 1.0.10¶
langchain_google_vertexai.callbacks¶
Classes¶
callbacks.VertexAICallbackHandler()
Callback Handler that tracks VertexAI info.
langchain_google_vertexai.chains¶
Functions¶
chains.create_structured_runnable(function, ...)
Create a runnable sequence that uses OpenAI functions.
chains.get_... | https://api.python.langchain.com/en/latest/google_vertexai_api_reference.html |
cea55fa46db7-1 | gemma.GemmaVertexAIModelGarden
Create a new model by parsing and validating input data from keyword arguments.
Functions¶
gemma.gemma_messages_to_prompt(history)
Converts a list of messages to a chat prompt for Gemma.
langchain_google_vertexai.llms¶
Classes¶
llms.VertexAI
Google Vertex AI large language models.
langcha... | https://api.python.langchain.com/en/latest/google_vertexai_api_reference.html |
cea55fa46db7-2 | vectorstores.vectorstores.VectorSearchVectorStoreGCS(...)
Alias of VectorSearchVectorStore for consistency with the rest of vector stores with different document storage backends.
langchain_google_vertexai.vision_models¶
Classes¶
vision_models.VertexAIImageCaptioning
Implementation of the Image Captioning model as an L... | https://api.python.langchain.com/en/latest/google_vertexai_api_reference.html |
1b56440ef051-0 | langchain_robocorp 0.0.10¶
langchain_robocorp.toolkits¶
Robocorp Action Server toolkit.
Classes¶
toolkits.ActionServerRequestTool
Requests POST tool with LLM-instructed extraction of truncated responses.
toolkits.ActionServerToolkit
Toolkit exposing Robocorp Action Server provided actions as individual tools.
toolkits.... | https://api.python.langchain.com/en/latest/robocorp_api_reference.html |
9d91b31140b8-0 | langchain_core.beta.runnables.context.config_with_context¶
langchain_core.beta.runnables.context.config_with_context(config: RunnableConfig, steps: List[Runnable]) → RunnableConfig[source]¶
Patch a runnable config with context getters and setters.
Parameters
config (RunnableConfig) – The runnable config.
steps (List[Ru... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.config_with_context.html |
bc17bd9ea70b-0 | langchain_core.beta.runnables.context.aconfig_with_context¶
langchain_core.beta.runnables.context.aconfig_with_context(config: RunnableConfig, steps: List[Runnable]) → RunnableConfig[source]¶
Asynchronously patch a runnable config with context getters and setters.
Parameters
config (RunnableConfig) – The runnable confi... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.aconfig_with_context.html |
d622528c18b6-0 | langchain_core.beta.runnables.context.Context¶
class langchain_core.beta.runnables.context.Context[source]¶
Context for a runnable.
The Context class provides methods for creating context scopes,
getters, and setters within a runnable. It allows for managing
and accessing contextual information throughout the execution... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.Context.html |
d622528c18b6-1 | Returns
The context scope.
Return type
PrefixContext
static getter(key: Union[str, List[str]], /) → ContextGet[source]¶
Parameters
key (Union[str, List[str]]) –
Return type
ContextGet
static setter(_key: Optional[str] = None, _value: Optional[Union[Runnable[Input, Output], Callable[[Input], Output], Callable[[Input], ... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.Context.html |
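The `Context.getter`/`Context.setter` pair records a value where it flows past one step of a chain and retrieves it at a later step. A toy illustration of that idea, with a plain dict standing in for the per-run scope (the actual implementation threads getters and setters through a patched `RunnableConfig` so each run is isolated):

```python
from typing import Any, Callable, Dict, Tuple


def make_context_pair(scope: Dict[str, Any], key: str) -> Tuple[Callable, Callable]:
    """Toy sketch of the getter/setter idea behind Context; not LangChain code."""

    def setter(value: Any) -> Any:
        scope[key] = value  # record the value as it flows past
        return value        # pass the input through unchanged

    def getter(_: Any) -> Any:
        return scope[key]   # retrieve the recorded value later in the chain

    return setter, getter
```

The pass-through behavior of the setter is the important design point: it lets a chain capture an intermediate value without disturbing the data flowing to the next step.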
625454455d65-0 | langchain_core.beta.runnables.context.PrefixContext¶
class langchain_core.beta.runnables.context.PrefixContext(prefix: str = '')[source]¶
Context for a runnable with a prefix.
Attributes
prefix
Methods
__init__([prefix])
getter(key, /)
setter([_key, _value])
Parameters
prefix (str) –
__init__(prefix: str = '')[source]... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.PrefixContext.html |
f00406dcd4cd-0 | langchain_core.beta.runnables.context.ContextGet¶
Note
ContextGet implements the standard Runnable Interface. 🏃
The Runnable Interface has additional methods that are available on runnables, such as with_types, with_retry, assign, bind, get_graph, and more.
class langchain_core.beta.runnables.context.ContextGet[source... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
f00406dcd4cd-1 | Returns
A list of outputs from the Runnable.
Return type
List[Output]
async abatch_as_completed(inputs: Sequence[Input], config: Optional[Union[RunnableConfig, Sequence[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → AsyncIterator[Tuple[int, Union[Output, Exception]]]¶
Run ainvo... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
f00406dcd4cd-2 | kwargs (Any) –
Return type
Any
as_tool(args_schema: Optional[Type[BaseModel]] = None, *, name: Optional[str] = None, description: Optional[str] = None, arg_types: Optional[Dict[str, Type]] = None) → BaseTool¶
Beta
This API is in beta and may change in the future.
Create a BaseTool from a Runnable.
as_tool will instant... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
f00406dcd4cd-3 | dict input, specifying schema via args_schema:
from typing import Any, Dict, List
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import RunnableLambda
def f(x: Dict[str, Any]) -> str:
return str(x["a"] * max(x["b"]))
class FSchema(BaseModel):
"""Apply a function to an inte... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
f00406dcd4cd-4 | Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
Parameters
input (Input) – The input to the Runnable.
config (Optional[RunnableConfig]) – The config to use for the Runnable. Defaults to None.
kwargs (Optional[Any]) – Additional keyword arg... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
f00406dcd4cd-5 | The order of the parent IDs is from the root to the immediate parent.
Only available for v2 version of the API. The v1 version of the API
will return an empty list.
tags: Optional[List[str]] - The tags of the Runnable that generated the event.
metadata: Optional[Dict[str, Any]] - The metadata of the Runnable that generat... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
f00406dcd4cd-6 | on_retriever_end
[retriever name]
{"query": "hello"}
[Document(…), ..]
on_prompt_start
[template_name]
{"question": "hello"}
on_prompt_end
[template_name]
{"question": "hello"}
ChatPromptValue(messages: [SystemMessage, …])
In addition to the standard events, users can also dispatch custom events (see example below).
Cu... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
f00406dcd4cd-7 | ]
# will produce the following events (run_id, and parent_ids
# has been omitted for brevity):
[
{
"data": {"input": "hello"},
"event": "on_chain_start",
"metadata": {},
"name": "reverse",
"tags": [],
},
{
"data": {"chunk": "olleh"},
"event": "on_chain... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
f00406dcd4cd-8 | return "Done"
slow_thing = RunnableLambda(slow_thing)
async for event in slow_thing.astream_events("some_input", version="v2"):
print(event)
Parameters
input (Any) – The input to the Runnable.
config (Optional[RunnableConfig]) – The config to use for the Runnable.
version (Literal['v1', 'v2']) – The version of the ... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
f00406dcd4cd-9 | Return type
AsyncIterator[Union[StandardStreamEvent, CustomStreamEvent]]
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool execut... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
f00406dcd4cd-10 | Configure alternatives for Runnables that can be set at runtime.
Parameters
which (ConfigurableField) – The ConfigurableField instance that will be used to select the
alternative.
default_key (str) – The default key to use if no alternative is selected.
Defaults to “default”.
prefix_keys (bool) – Whether to prefix the ... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
f00406dcd4cd-11 | Returns
A new Runnable with the fields configured.
Return type
RunnableSerializable[Input, Output]
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
model = ChatOpenAI(max_tokens=20).configurable_fields(
max_tokens=ConfigurableField(
id="output_token_number",
... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
f00406dcd4cd-12 | kwargs (Optional[Any]) – Additional keyword arguments to pass to the Runnable.
Yields
The output of the Runnable.
Return type
Iterator[Output]
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
Serialize the Runnable to JSON.
Returns
A JSON-serializable representation of the Runnable.
Return type
Union... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html |
d27bdcf40db0-0 | langchain_core.beta.runnables.context.ContextSet¶
Note
ContextSet implements the standard Runnable Interface. 🏃
The Runnable Interface has additional methods that are available on runnables, such as with_types, with_retry, assign, bind, get_graph, and more.
class langchain_core.beta.runnables.context.ContextSet[source... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
d27bdcf40db0-1 | Returns
A list of outputs from the Runnable.
Return type
List[Output]
async abatch_as_completed(inputs: Sequence[Input], config: Optional[Union[RunnableConfig, Sequence[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → AsyncIterator[Tuple[int, Union[Output, Exception]]]¶
Run ainvo... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
d27bdcf40db0-2 | kwargs (Any) –
Return type
Any
as_tool(args_schema: Optional[Type[BaseModel]] = None, *, name: Optional[str] = None, description: Optional[str] = None, arg_types: Optional[Dict[str, Type]] = None) → BaseTool¶
Beta
This API is in beta and may change in the future.
Create a BaseTool from a Runnable.
as_tool will instant... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
d27bdcf40db0-3 | dict input, specifying schema via args_schema:
from typing import Any, Dict, List
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import RunnableLambda
def f(x: Dict[str, Any]) -> str:
return str(x["a"] * max(x["b"]))
class FSchema(BaseModel):
"""Apply a function to an inte... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
d27bdcf40db0-4 | Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
Parameters
input (Input) – The input to the Runnable.
config (Optional[RunnableConfig]) – The config to use for the Runnable. Defaults to None.
kwargs (Optional[Any]) – Additional keyword arg... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
d27bdcf40db0-5 | The order of the parent IDs is from the root to the immediate parent.
Only available for v2 version of the API. The v1 version of the API
will return an empty list.
tags: Optional[List[str]] - The tags of the Runnable that generated the event.
metadata: Optional[Dict[str, Any]] - The metadata of the Runnable that generat... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
d27bdcf40db0-6 | on_retriever_end
[retriever name]
{"query": "hello"}
[Document(…), ..]
on_prompt_start
[template_name]
{"question": "hello"}
on_prompt_end
[template_name]
{"question": "hello"}
ChatPromptValue(messages: [SystemMessage, …])
In addition to the standard events, users can also dispatch custom events (see example below).
Cu... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
d27bdcf40db0-7 | ]
# will produce the following events (run_id, and parent_ids
# has been omitted for brevity):
[
{
"data": {"input": "hello"},
"event": "on_chain_start",
"metadata": {},
"name": "reverse",
"tags": [],
},
{
"data": {"chunk": "olleh"},
"event": "on_chain... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
d27bdcf40db0-8 | return "Done"
slow_thing = RunnableLambda(slow_thing)
async for event in slow_thing.astream_events("some_input", version="v2"):
print(event)
Parameters
input (Any) – The input to the Runnable.
config (Optional[RunnableConfig]) – The config to use for the Runnable.
version (Literal['v1', 'v2']) – The version of the ... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
d27bdcf40db0-9 | Return type
AsyncIterator[Union[StandardStreamEvent, CustomStreamEvent]]
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool execut... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
d27bdcf40db0-10 | Configure alternatives for Runnables that can be set at runtime.
Parameters
which (ConfigurableField) – The ConfigurableField instance that will be used to select the
alternative.
default_key (str) – The default key to use if no alternative is selected.
Defaults to “default”.
prefix_keys (bool) – Whether to prefix the ... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
d27bdcf40db0-11 | Returns
A new Runnable with the fields configured.
Return type
RunnableSerializable[Input, Output]
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
model = ChatOpenAI(max_tokens=20).configurable_fields(
max_tokens=ConfigurableField(
id="output_token_number",
... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
d27bdcf40db0-12 | kwargs (Optional[Any]) – Additional keyword arguments to pass to the Runnable.
Yields
The output of the Runnable.
Return type
Iterator[Output]
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
Serialize the Runnable to JSON.
Returns
A JSON-serializable representation of the Runnable.
Return type
Union... | https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html |
04c90c3b413c-0 | langchain_community.chat_loaders.imessage.nanoseconds_from_2001_to_datetime¶
langchain_community.chat_loaders.imessage.nanoseconds_from_2001_to_datetime(nanoseconds: int) → datetime[source]¶
Convert nanoseconds since 2001 to a datetime object.
Parameters
nanoseconds (int) – Nanoseconds since January 1, 2001.
Returns
Da... | https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.imessage.nanoseconds_from_2001_to_datetime.html |
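The conversion is a fixed-epoch offset: Apple stores iMessage timestamps as nanoseconds since 2001-01-01. A sketch of the documented behavior:

```python
from datetime import datetime, timedelta


def nanoseconds_from_2001_to_datetime(nanoseconds: int) -> datetime:
    """Convert nanoseconds since January 1, 2001 to a datetime object."""
    return datetime(2001, 1, 1) + timedelta(seconds=nanoseconds / 1e9)
```

Note the division truncates sub-microsecond precision, since `timedelta` resolves to microseconds.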
aa78b272f693-0 | langchain_community.chat_loaders.gmail.GMailLoader¶
class langchain_community.chat_loaders.gmail.GMailLoader(creds: Any, n: int = 100, raise_error: bool = False)[source]¶
Deprecated since version 0.0.32: Use langchain_google_community.GMailLoader instead.
Load data from GMail.
There are many ways you could want to load... | https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.gmail.GMailLoader.html |
aa78b272f693-1 | Returns
An iterator of chat sessions.
Return type
Iterator[ChatSession]
load() → List[ChatSession]¶
Eagerly load the chat sessions into memory.
Returns
A list of chat sessions.
Return type
List[ChatSession]
Examples using GMailLoader¶
GMail | https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.gmail.GMailLoader.html |
langchain_community.chat_loaders.imessage.IMessageChatLoader¶
class langchain_community.chat_loaders.imessage.IMessageChatLoader(path: Optional[Union[str, Path]] = None)[source]¶
Load chat sessions from the iMessage chat.db SQLite file.
It only works on macOS when you have iMessage enabled and have the chat.db file.
Th... | https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.imessage.IMessageChatLoader.html |
Eagerly load the chat sessions into memory.
Returns
A list of chat sessions.
Return type
List[ChatSession]
Examples using IMessageChatLoader¶
iMessage | https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.imessage.IMessageChatLoader.html |
langchain_community.chat_loaders.facebook_messenger.SingleFileFacebookMessengerChatLoader¶
class langchain_community.chat_loaders.facebook_messenger.SingleFileFacebookMessengerChatLoader(path: Union[Path, str])[source]¶
Load Facebook Messenger chat data from a single file.
Parameters
path (Union[Path, str]) – The path to the file.
langchain_community.chat_loaders.facebook_messenger.FolderFacebookMessengerChatLoader¶
class langchain_community.chat_loaders.facebook_messenger.FolderFacebookMessengerChatLoader(path: Union[str, Path])[source]¶
Load Facebook Messenger chat data from a folder.
Parameters
path (Union[str, Path]) – The path to the directory.
langchain_community.chat_loaders.utils.map_ai_messages_in_session¶
langchain_community.chat_loaders.utils.map_ai_messages_in_session(chat_sessions: ChatSession, sender: str) → ChatSession[source]¶
Convert messages from the specified ‘sender’ to AI messages.
This is useful for fine-tuning the AI to adapt to your voice.
... | https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.utils.map_ai_messages_in_session.html |
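The relabeling step described above can be sketched in plain Python. Here ChatSession is modeled as a simple dict with a "messages" list; the real API operates on LangChain message objects, so this is only an illustration of the semantics:

```python
def map_ai_messages_in_session(session: dict, sender: str) -> dict:
    """Mark every message from `sender` as an AI message (sketch)."""
    messages = [
        {**m, "role": "ai"} if m["sender"] == sender else m
        for m in session["messages"]
    ]
    return {**session, "messages": messages}
```

The result can then be used as fine-tuning data where the chosen sender plays the assistant role.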
langchain_community.chat_loaders.slack.SlackChatLoader¶
class langchain_community.chat_loaders.slack.SlackChatLoader(path: Union[str, Path])[source]¶
Load Slack conversations from a dump zip file.
Initialize the chat loader with the path to the exported Slack dump zip file.
Parameters
path (Union[str, Path]) – Path to the exported Slack dump zip file.
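A Slack export zip contains one directory per channel, with one JSON file of messages per day. A minimal sketch of walking that layout (a simplified view of the export format, not the loader's actual implementation) might look like:

```python
import json
import zipfile

def read_slack_dump(zip_path: str) -> list:
    """Collect raw message dicts from a Slack export zip (sketch)."""
    messages = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in sorted(zf.namelist()):
            # Channel folders hold per-day JSON files, e.g. general/2023-01-01.json
            if name.endswith(".json") and "/" in name:
                messages.extend(json.loads(zf.read(name)))
    return messages
```

The real loader additionally converts each raw dict into LangChain message types and groups them into chat sessions.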
langchain_community.chat_loaders.telegram.TelegramChatLoader¶
class langchain_community.chat_loaders.telegram.TelegramChatLoader(path: Union[str, Path])[source]¶
Load telegram conversations to LangChain chat messages.
To export, use the Telegram Desktop app from
https://desktop.telegram.org/, select a conversation, click the three dots in the top right corner, and select “Export chat history”.
langchain_community.chat_loaders.utils.merge_chat_runs¶
langchain_community.chat_loaders.utils.merge_chat_runs(chat_sessions: Iterable[ChatSession]) → Iterator[ChatSession][source]¶
Merge chat runs together.
A chat run is a sequence of messages from the same sender.
Parameters
chat_sessions (Iterable[ChatSession]) – A ... | https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.utils.merge_chat_runs.html |
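The merging step can be sketched in plain Python: consecutive messages from the same sender are concatenated into one message. As above, the session is modeled as a plain dict rather than the real ChatSession type, and the delimiter default mirrors the per-session variant documented below:

```python
def merge_runs(session: dict, delimiter: str = "\n\n") -> dict:
    """Concatenate consecutive same-sender messages (sketch)."""
    merged: list = []
    for msg in session["messages"]:
        if merged and merged[-1]["sender"] == msg["sender"]:
            # Same sender as the previous message: extend the current run.
            merged[-1] = {
                **merged[-1],
                "content": merged[-1]["content"] + delimiter + msg["content"],
            }
        else:
            merged.append(dict(msg))
    return {**session, "messages": merged}
```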
langchain_core.chat_loaders.BaseChatLoader¶
class langchain_core.chat_loaders.BaseChatLoader[source]¶
Base class for chat loaders.
Methods
__init__()
lazy_load()
Lazy load the chat sessions.
load()
Eagerly load the chat sessions into memory.
__init__()¶
abstract lazy_load() → Iterator[ChatSession][source]¶
Lazy load the chat sessions.
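The contract above can be sketched as a small abstract base class: subclasses implement lazy_load(), and load() simply materializes the iterator. ChatSession is again modeled as a plain dict, and the class names here are illustrative:

```python
from abc import ABC, abstractmethod
from typing import Iterator, List

class SketchChatLoader(ABC):
    @abstractmethod
    def lazy_load(self) -> Iterator[dict]:
        """Lazily yield chat sessions one at a time."""

    def load(self) -> List[dict]:
        # Eager loading is just the lazy iterator, exhausted.
        return list(self.lazy_load())

class InMemoryChatLoader(SketchChatLoader):
    def __init__(self, sessions: List[dict]):
        self.sessions = sessions

    def lazy_load(self) -> Iterator[dict]:
        yield from self.sessions
```

Concrete loaders (Slack, iMessage, WhatsApp, and so on) differ only in where lazy_load() reads its sessions from.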
langchain_community.chat_loaders.whatsapp.WhatsAppChatLoader¶
class langchain_community.chat_loaders.whatsapp.WhatsAppChatLoader(path: str)[source]¶
Load WhatsApp conversations from a dump zip file or directory.
Initialize the WhatsAppChatLoader.
Parameters
path (str) – Path to the exported WhatsApp chat
zip directory, folder, or file.
langchain_community.chat_loaders.langsmith.LangSmithDatasetChatLoader¶
class langchain_community.chat_loaders.langsmith.LangSmithDatasetChatLoader(*, dataset_name: str, client: Optional['Client'] = None)[source]¶
Load chat sessions from a LangSmith dataset with the “chat” data type.
dataset_name¶
The name of the LangSmith dataset.
Eagerly load the chat sessions into memory.
Returns
A list of chat sessions.
Return type
List[ChatSession]
Examples using LangSmithDatasetChatLoader¶
LangSmith Chat Datasets | https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.langsmith.LangSmithDatasetChatLoader.html |
langchain_community.chat_loaders.utils.map_ai_messages¶
langchain_community.chat_loaders.utils.map_ai_messages(chat_sessions: Iterable[ChatSession], sender: str) → Iterator[ChatSession][source]¶
Convert messages from the specified ‘sender’ to AI messages.
This is useful for fine-tuning the AI to adapt to your voice.
Parameters
langchain_community.chat_loaders.utils.merge_chat_runs_in_session¶
langchain_community.chat_loaders.utils.merge_chat_runs_in_session(chat_session: ChatSession, delimiter: str = '\n\n') → ChatSession[source]¶
Merge chat runs together in a chat session.
A chat run is a sequence of messages from the same sender.
Parameters
langchain_community.chat_loaders.langsmith.LangSmithRunChatLoader¶
class langchain_community.chat_loaders.langsmith.LangSmithRunChatLoader(runs: Iterable[Union[str, Run]], client: Optional['Client'] = None)[source]¶
Load chat sessions from a list of LangSmith “llm” runs.
runs¶
The list of LLM run IDs or run objects.
Ty... | https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.langsmith.LangSmithRunChatLoader.html |
Return type
Iterator[ChatSession]
load() → List[ChatSession]¶
Eagerly load the chat sessions into memory.
Returns
A list of chat sessions.
Return type
List[ChatSession]
Examples using LangSmithRunChatLoader¶
LangSmith LLM Runs | https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.langsmith.LangSmithRunChatLoader.html |
langchain.globals.set_verbose¶
langchain.globals.set_verbose(value: bool) → None[source]¶
Set a new value for the verbose global setting.
Parameters
value (bool) –
Return type
None
Examples using set_verbose¶
How to debug your LLM apps
OpaquePrompts | https://api.python.langchain.com/en/latest/globals/langchain.globals.set_verbose.html |
langchain_core.globals.set_debug¶
langchain_core.globals.set_debug(value: bool) → None[source]¶
Set a new value for the debug global setting.
Parameters
value (bool) – The new value for the debug global setting.
Return type
None
Examples using set_debug¶
Bittensor
Document Comparison
How to debug your LLM apps
OpaquePrompts
langchain_core.globals.get_verbose¶
langchain_core.globals.get_verbose() → bool[source]¶
Get the value of the verbose global setting.
Returns
The value of the verbose global setting.
Return type
bool | https://api.python.langchain.com/en/latest/globals/langchain_core.globals.get_verbose.html |
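set_verbose/get_verbose (and their debug counterparts) follow a simple module-global getter/setter pattern. A minimal sketch of that pattern, not the library's implementation (which also does extra bookkeeping for backwards compatibility):

```python
# Module-level flag with a setter and getter.
_verbose: bool = False

def set_verbose(value: bool) -> None:
    global _verbose
    _verbose = value

def get_verbose() -> bool:
    return _verbose
```

Keeping the flag behind accessor functions lets the library change where the setting is stored without breaking callers.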
langchain_core.globals.set_llm_cache¶
langchain_core.globals.set_llm_cache(value: Optional[BaseCache]) → None[source]¶
Set a new LLM cache, overwriting the previous value, if any.
Parameters
value (Optional[BaseCache]) – The new LLM cache to use. If None, the LLM cache is disabled.
Return type
None
Examples using set_llm_cache¶
langchain_core.globals.get_llm_cache¶
langchain_core.globals.get_llm_cache() → BaseCache[source]¶
Get the value of the llm_cache global setting.
Returns
The value of the llm_cache global setting.
Return type
BaseCache | https://api.python.langchain.com/en/latest/globals/langchain_core.globals.get_llm_cache.html |
langchain_core.globals.get_debug¶
langchain_core.globals.get_debug() → bool[source]¶
Get the value of the debug global setting.
Returns
The value of the debug global setting.
Return type
bool | https://api.python.langchain.com/en/latest/globals/langchain_core.globals.get_debug.html |
langchain.globals.get_llm_cache¶
langchain.globals.get_llm_cache() → BaseCache[source]¶
Get the value of the llm_cache global setting.
Return type
BaseCache | https://api.python.langchain.com/en/latest/globals/langchain.globals.get_llm_cache.html |