Serializes data as JSON using the json package from the Python standard library.
vectorstores.sklearn.ParquetSerializer(...)
Serializes data in Apache Parquet format using the pyarrow package.
vectorstores.sklearn.SKLearnVectorStore(...)
Simple in-memory vector store based on scikit-learn's NearestNeighbors.
vectorstores.sklearn.SKLearnVectorStoreException
Exception raised by SKLearnVectorStore.
vectorstores.sqlitevss.SQLiteVSS(table, ...)
Wrapper around SQLite with vss extension as a vector database.
vectorstores.starrocks.StarRocks(embedding)
StarRocks vector store.
vectorstores.starrocks.StarRocksSettings
StarRocks client configuration.
vectorstores.supabase.SupabaseVectorStore(...)
Supabase Postgres vector store.
vectorstores.surrealdb.SurrealDBStore(...)
SurrealDB as Vector Store.
vectorstores.tair.Tair(embedding_function, ...)
Tair vector store.
vectorstores.tencentvectordb.ConnectionParams(...)
Tencent vector DB Connection params.
vectorstores.tencentvectordb.IndexParams(...)
Tencent vector DB Index params.
vectorstores.tencentvectordb.TencentVectorDB(...)
Tencent VectorDB as a vector store.
vectorstores.tigris.Tigris(client, ...)
Tigris vector store.
vectorstores.tiledb.TileDB(embedding, ...[, ...])
TileDB vector store.
vectorstores.timescalevector.TimescaleVector(...)
Timescale Postgres vector store.
vectorstores.typesense.Typesense(...[, ...])
Typesense vector store.
vectorstores.usearch.USearch(embedding, ...)
USearch vector store.
vectorstores.utils.DistanceStrategy(value[, ...])
Enumerator of the Distance strategies for calculating distances between vectors.
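The distance-strategy enum above can be sketched in plain Python. The member names and formulas below are illustrative assumptions, not the library's exact definitions:

```python
import math
from enum import Enum


class DistanceStrategy(str, Enum):
    """Illustrative enum of distance strategies (member names are assumptions)."""
    EUCLIDEAN_DISTANCE = "euclidean"
    COSINE = "cosine"
    MAX_INNER_PRODUCT = "max_inner_product"


def distance(a, b, strategy: DistanceStrategy) -> float:
    """Compute the distance between two vectors under the chosen strategy."""
    if strategy is DistanceStrategy.EUCLIDEAN_DISTANCE:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    if strategy is DistanceStrategy.COSINE:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return 1.0 - dot / (na * nb)  # cosine *distance*, not similarity
    # MAX_INNER_PRODUCT: a larger inner product means "closer", so negate it
    return -sum(x * y for x, y in zip(a, b))
```

Deriving from `str` lets the members serialize cleanly in configs, which is a common pattern for such enums.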
https://api.python.langchain.com/en/latest/community_api_reference.html
vectorstores.vald.Vald(embedding[, host, ...])
Wrapper around Vald vector database.
vectorstores.vearch.Vearch(embedding_function)
Vearch vector store; flag 1 for cluster, 0 for standalone.
vectorstores.vectara.Vectara([...])
Vectara API vector store.
vectorstores.vectara.VectaraRetriever
Retriever class for Vectara.
vectorstores.vespa.VespaStore(app[, ...])
Vespa vector store.
vectorstores.weaviate.Weaviate(client, ...)
Weaviate vector store.
vectorstores.xata.XataVectorStore(api_key, ...)
Xata vector store.
vectorstores.yellowbrick.Yellowbrick(...)
Wrapper around Yellowbrick as a vector database.
vectorstores.zep.CollectionConfig(name, ...)
Configuration for a Zep Collection.
vectorstores.zep.ZepVectorStore(...[, ...])
Zep vector store.
vectorstores.zilliz.Zilliz(embedding_function)
Zilliz vector store.
Functions¶
vectorstores.alibabacloud_opensearch.create_metadata(fields)
Create metadata from fields.
vectorstores.annoy.dependable_annoy_import()
Import annoy if available, otherwise raise error.
vectorstores.clickhouse.has_mul_sub_str(s, *args)
Check if a string contains multiple substrings.
vectorstores.faiss.dependable_faiss_import([...])
Import faiss if available, otherwise raise error.
vectorstores.myscale.has_mul_sub_str(s, *args)
Check if a string contains multiple substrings.
vectorstores.neo4j_vector.check_if_not_null(...)
Check that the values are not None or an empty string.
vectorstores.neo4j_vector.remove_lucene_chars(text)
Remove Lucene special characters.
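The idea behind this helper can be sketched as follows; the exact character set and replacement policy below are assumptions, based on Lucene's documented query-syntax characters:

```python
# Characters with special meaning in Lucene query syntax (an assumed set).
LUCENE_SPECIAL_CHARS = '+-!(){}[]^"~*?:\\/&|'


def remove_lucene_chars(text: str) -> str:
    """Strip Lucene query-syntax characters from free text before it is
    embedded in a full-text search clause."""
    cleaned = "".join(" " if c in LUCENE_SPECIAL_CHARS else c for c in text)
    # Collapse the runs of whitespace left behind by removed characters.
    return " ".join(cleaned.split())
```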
vectorstores.neo4j_vector.sort_by_index_name(...)
Sort the first element to match the index_name, if it exists.
vectorstores.qdrant.sync_call_fallback(method)
Decorator to call the synchronous method of the class if the async method is not implemented.
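The fallback pattern this decorator describes can be sketched in plain Python. The naming convention (async `a`-prefixed method falling back to its sync counterpart) and the use of an executor are assumptions for illustration:

```python
import asyncio
import functools


def sync_call_fallback(method):
    """If the decorated async method raises NotImplementedError, fall back to
    the sync method of the same name without the leading 'a', run in a thread."""
    @functools.wraps(method)
    async def wrapper(self, *args, **kwargs):
        try:
            return await method(self, *args, **kwargs)
        except NotImplementedError:
            sync_method = getattr(self, method.__name__[1:])  # 'asearch' -> 'search'
            return await asyncio.get_running_loop().run_in_executor(
                None, functools.partial(sync_method, *args, **kwargs)
            )
    return wrapper


class Store:
    def search(self, q):
        return f"sync:{q}"

    @sync_call_fallback
    async def asearch(self, q):
        raise NotImplementedError
```

Running the sync method in the default executor keeps a blocking implementation from stalling the event loop.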
vectorstores.redis.base.check_index_exists(...)
Check if Redis index exists.
vectorstores.redis.filters.check_operator_misuse(func)
Decorator to check for misuse of equality operators.
vectorstores.redis.schema.read_schema(...)
Reads in the index schema from a dict or yaml file.
vectorstores.scann.dependable_scann_import()
Import scann if available, otherwise raise error.
vectorstores.scann.normalize(x)
Normalize vectors to unit length.
vectorstores.starrocks.debug_output(s)
Print a debug message if DEBUG is True.
vectorstores.starrocks.get_named_result(...)
Get a named result from a query.
vectorstores.starrocks.has_mul_sub_str(s, *args)
Check if a string has multiple substrings.
vectorstores.tiledb.dependable_tiledb_import()
Import tiledb-vector-search if available, otherwise raise error.
vectorstores.tiledb.get_documents_array_uri(uri)
Get the URI of the documents array.
vectorstores.tiledb.get_documents_array_uri_from_group(group)
Get the URI of the documents array from group.
vectorstores.tiledb.get_vector_index_uri(uri)
Get the URI of the vector index.
vectorstores.tiledb.get_vector_index_uri_from_group(group)
Get the URI of the vector index.
vectorstores.usearch.dependable_usearch_import()
Import usearch if available, otherwise raise error.
vectorstores.utils.filter_complex_metadata(...)
Filter out metadata types that are not supported for a vector store.
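The filtering idea can be sketched per metadata dict; note the real helper operates on a list of Documents, and the allowed-type set below is an assumption:

```python
def filter_complex_metadata(metadata: dict) -> dict:
    """Keep only metadata values of simple scalar types that most vector
    stores accept, dropping lists, dicts, and other complex values."""
    allowed = (str, int, float, bool)
    return {k: v for k, v in metadata.items() if isinstance(v, allowed)}
```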
vectorstores.utils.maximal_marginal_relevance(...)
Calculate maximal marginal relevance.
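Maximal marginal relevance greedily balances relevance to the query against redundancy with already-selected results. A minimal pure-Python sketch of that selection loop (the real function works on embedding matrices; signature and defaults here are assumptions):

```python
import math


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


def maximal_marginal_relevance(query, candidates, k=2, lambda_mult=0.5):
    """Greedily pick k candidate vectors, trading off relevance to the query
    (weight lambda_mult) against similarity to already-selected vectors."""
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        best_idx, best_score = None, -float("inf")
        for i in remaining:
            relevance = cosine(query, candidates[i])
            redundancy = max(
                (cosine(candidates[i], candidates[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
        remaining.remove(best_idx)
    return selected
```

With a low lambda_mult, a duplicate of an already-selected vector loses to a less relevant but more diverse one.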
langchain 0.0.351¶
langchain.agents¶
Agent is a class that uses an LLM to choose a sequence of actions to take.
In Chains, a sequence of actions is hardcoded. In Agents,
a language model is used as a reasoning engine to determine which actions
to take and in which order.
Agents select and use Tools and Toolkits for actions.
Class hierarchy:
BaseSingleActionAgent --> LLMSingleActionAgent
                          OpenAIFunctionsAgent
                          XMLAgent
                          Agent --> <name>Agent  # Examples: ZeroShotAgent, ChatAgent
BaseMultiActionAgent  --> OpenAIMultiFunctionsAgent
Main helpers:
AgentType, AgentExecutor, AgentOutputParser, AgentExecutorIterator,
AgentAction, AgentFinish
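The reasoning loop described above can be made concrete with a toy executor. The stub model below stands in for an LLM and everything here is illustrative, not the library's actual classes:

```python
# Toy executor loop illustrating the agent pattern: the model proposes an
# action, the executor runs the matching tool, and the loop repeats until
# the model emits a finish signal. fake_model is a stand-in for an LLM.
def fake_model(question, scratchpad):
    if not scratchpad:
        return ("calculator", question)    # like an AgentAction
    return ("FINISH", scratchpad[-1][1])   # like an AgentFinish


def run_agent(question, tools, model, max_steps=5):
    scratchpad = []  # (tool_input, observation) pairs, like intermediate_steps
    for _ in range(max_steps):
        tool_name, payload = model(question, scratchpad)
        if tool_name == "FINISH":
            return payload
        observation = tools[tool_name](payload)
        scratchpad.append((payload, observation))
    raise RuntimeError("agent stopped after max_steps")


# eval is acceptable only in this toy "calculator" tool.
tools = {"calculator": lambda expr: str(eval(expr))}
```

The scratchpad of past (action, observation) pairs is what the format_scratchpad helpers listed below serialize back into the prompt.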
Classes¶
agents.agent.Agent
Agent that calls the language model and decides which action to take.
agents.agent.AgentExecutor
Agent that uses tools.
agents.agent.AgentOutputParser
Base class for parsing agent output into agent action/finish.
agents.agent.BaseMultiActionAgent
Base Multi Action Agent class.
agents.agent.BaseSingleActionAgent
Base Single Action Agent class.
agents.agent.ExceptionTool
Tool that just returns the query.
agents.agent.LLMSingleActionAgent
Base class for single action agents.
agents.agent.MultiActionAgentOutputParser
Base class for parsing agent output into agent actions/finish.
agents.agent.RunnableAgent
Agent powered by runnables.
agents.agent.RunnableMultiActionAgent
Agent powered by runnables.
agents.agent_iterator.AgentExecutorIterator(...)
Iterator for AgentExecutor.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo
Information about a VectorStore.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit
Toolkit for routing between Vector Stores.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit
Toolkit for interacting with a Vector Store.
https://api.python.langchain.com/en/latest/langchain_api_reference.html
agents.agent_types.AgentType(value[, names, ...])
An enum for agent types.
agents.chat.base.ChatAgent
Chat Agent.
agents.chat.output_parser.ChatOutputParser
Output parser for the chat agent.
agents.conversational.base.ConversationalAgent
An agent that holds a conversation in addition to using tools.
agents.conversational.output_parser.ConvoOutputParser
Output parser for the conversational agent.
agents.conversational_chat.base.ConversationalChatAgent
An agent designed to hold a conversation in addition to using tools.
agents.conversational_chat.output_parser.ConvoOutputParser
Output parser for the conversational agent.
agents.mrkl.base.ChainConfig(action_name, ...)
Configuration for chain to use in MRKL system.
agents.mrkl.base.MRKLChain
[Deprecated] Chain that implements the MRKL system.
agents.mrkl.base.ZeroShotAgent
Agent for the MRKL chain.
agents.mrkl.output_parser.MRKLOutputParser
MRKL Output parser for the chat agent.
agents.openai_assistant.base.OpenAIAssistantAction
AgentAction with info needed to submit custom tool output to existing run.
agents.openai_assistant.base.OpenAIAssistantFinish
AgentFinish with run and thread metadata.
agents.openai_assistant.base.OpenAIAssistantRunnable
Run an OpenAI Assistant.
agents.openai_functions_agent.agent_token_buffer_memory.AgentTokenBufferMemory
Memory used to save agent output AND intermediate steps.
agents.openai_functions_agent.base.OpenAIFunctionsAgent
An agent driven by OpenAI's function-calling API.
agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent
An agent driven by OpenAI's function-calling API.
agents.output_parsers.json.JSONAgentOutputParser
Parses tool invocations and final answers in JSON format.
agents.output_parsers.openai_functions.OpenAIFunctionsAgentOutputParser
Parses a message into agent action/finish.
agents.output_parsers.openai_tools.OpenAIToolAgentAction
Override init to support instantiation by position for backward compat.
agents.output_parsers.openai_tools.OpenAIToolsAgentOutputParser
Parses a message into agent actions/finish.
agents.output_parsers.react_json_single_input.ReActJsonSingleInputOutputParser
Parses ReAct-style LLM calls that have a single tool input in json format.
agents.output_parsers.react_single_input.ReActSingleInputOutputParser
Parses ReAct-style LLM calls that have a single tool input.
agents.output_parsers.self_ask.SelfAskOutputParser
Parses self-ask style LLM calls.
agents.output_parsers.xml.XMLAgentOutputParser
Parses tool invocations and final answers in XML format.
agents.react.base.DocstoreExplorer(docstore)
Class to assist with exploration of a document store.
agents.react.base.ReActChain
[Deprecated] Chain that implements the ReAct paper.
agents.react.base.ReActDocstoreAgent
Agent for the ReAct chain.
agents.react.base.ReActTextWorldAgent
Agent for the ReAct TextWorld chain.
agents.react.output_parser.ReActOutputParser
Output parser for the ReAct agent.
agents.schema.AgentScratchPadChatPromptTemplate
Chat prompt template for the agent scratchpad.
agents.self_ask_with_search.base.SelfAskWithSearchAgent
Agent for the self-ask-with-search paper.
agents.self_ask_with_search.base.SelfAskWithSearchChain
[Deprecated] Chain that does self-ask with search.
agents.structured_chat.base.StructuredChatAgent
Structured Chat Agent.
agents.structured_chat.output_parser.StructuredChatOutputParser
Output parser for the structured chat agent.
agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries
Output parser with retries for the structured chat agent.
agents.tools.InvalidTool
Tool that is run when invalid tool name is encountered by agent.
agents.xml.base.XMLAgent
Agent that uses XML tags.
Functions¶
agents.agent_toolkits.conversational_retrieval.openai_functions.create_conversational_retrieval_agent(...)
A convenience method for creating a conversational retrieval agent.
agents.agent_toolkits.vectorstore.base.create_vectorstore_agent(...)
Construct a VectorStore agent from an LLM and tools.
agents.agent_toolkits.vectorstore.base.create_vectorstore_router_agent(...)
Construct a VectorStore router agent from an LLM and tools.
agents.format_scratchpad.log.format_log_to_str(...)
Construct the scratchpad that lets the agent continue its thought process.
agents.format_scratchpad.log_to_messages.format_log_to_messages(...)
Construct the scratchpad that lets the agent continue its thought process.
agents.format_scratchpad.openai_functions.format_to_openai_function_messages(...)
Convert (AgentAction, tool output) tuples into FunctionMessages.
agents.format_scratchpad.openai_functions.format_to_openai_functions(...)
Convert (AgentAction, tool output) tuples into FunctionMessages.
agents.format_scratchpad.openai_tools.format_to_openai_tool_messages(...)
Convert (AgentAction, tool output) tuples into FunctionMessages.
agents.format_scratchpad.xml.format_xml(...)
Format the intermediate steps as XML.
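The XML scratchpad format can be sketched as below; the tag names are assumptions chosen to match the entry's description, not verified against the library:

```python
def format_xml(intermediate_steps):
    """Render (action, observation) pairs as an XML-tagged scratchpad
    string that an XML-style agent can consume (tag names are assumptions)."""
    log = ""
    for (tool, tool_input), observation in intermediate_steps:
        log += (
            f"<tool>{tool}</tool><tool_input>{tool_input}</tool_input>"
            f"<observation>{observation}</observation>"
        )
    return log
```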
agents.initialize.initialize_agent(tools, llm)
Load an agent executor given tools and LLM.
agents.load_tools.get_all_tool_names()
Get a list of all possible tool names.
agents.load_tools.load_huggingface_tool(...)
Loads a tool from the HuggingFace Hub.
agents.load_tools.load_tools(tool_names[, ...])
Load tools based on their name.
agents.loading.load_agent(path, **kwargs)
Unified method for loading an agent from LangChainHub or local fs.
agents.loading.load_agent_from_config(config)
Load agent from Config Dict.
agents.output_parsers.openai_tools.parse_ai_message_to_openai_tool_action(message)
Parse an AI message potentially containing tool_calls.
agents.utils.validate_tools_single_input(...)
Validate tools for single input.
langchain.callbacks¶
Callback handlers allow listening to events in LangChain.
Class hierarchy:
BaseCallbackHandler --> <name>CallbackHandler # Example: AimCallbackHandler
Classes¶
callbacks.file.FileCallbackHandler(filename)
Callback Handler that writes to a file.
callbacks.streaming_aiter.AsyncIteratorCallbackHandler()
Callback handler that returns an async iterator.
callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*)
Callback handler that returns an async iterator.
callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler(*)
Callback handler for streaming in agents.
callbacks.tracers.logging.LoggingCallbackHandler(logger)
Tracer that logs via the input Logger.
langchain.chains¶
Chains are easily reusable components linked together.
Chains encode a sequence of calls to components like models, document retrievers,
other Chains, etc., and provide a simple interface to this sequence.
The Chain interface makes it easy to create apps that are:
Stateful: add Memory to any Chain to give it state,
Observable: pass Callbacks to a Chain to execute additional functionality, like logging, outside the main sequence of component calls,
Composable: combine Chains with other components, including other Chains.
Class hierarchy:
Chain --> <name>Chain # Examples: LLMChain, MapReduceChain, RouterChain
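The composition idea can be sketched in plain Python: each chain maps an input dict to an output dict, and a sequential chain feeds one chain's outputs into the next. This loosely mirrors SequentialChain and is not the library's actual class:

```python
class Chain:
    """Minimal sketch of the Chain interface: dict in, dict out."""
    def run(self, inputs: dict) -> dict:
        raise NotImplementedError


class UppercaseChain(Chain):
    def run(self, inputs):
        return {"text": inputs["text"].upper()}


class ExclaimChain(Chain):
    def run(self, inputs):
        return {"text": inputs["text"] + "!"}


class SimpleSequential(Chain):
    """Feed each chain's output dict into the next chain, in order."""
    def __init__(self, chains):
        self.chains = chains

    def run(self, inputs):
        for chain in self.chains:
            inputs = chain.run(inputs)
        return inputs
```

Because every chain shares the same dict-to-dict interface, composition needs no knowledge of what each step does internally.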
Classes¶
chains.api.base.APIChain
Chain that makes API calls and summarizes the responses to answer a question.
chains.api.openapi.chain.OpenAPIEndpointChain
Chain interacts with an OpenAPI endpoint using natural language.
chains.api.openapi.requests_chain.APIRequesterChain
Get the request parser.
chains.api.openapi.requests_chain.APIRequesterOutputParser
Parse the request and error tags.
chains.api.openapi.response_chain.APIResponderChain
Get the response parser.
chains.api.openapi.response_chain.APIResponderOutputParser
Parse the response and error tags.
chains.base.Chain
Abstract base class for creating structured sequences of calls to components.
chains.combine_documents.base.AnalyzeDocumentChain
Chain that splits documents, then analyzes them in pieces.
chains.combine_documents.base.BaseCombineDocumentsChain
Base interface for chains combining documents.
chains.combine_documents.map_reduce.MapReduceDocumentsChain
Combining documents by mapping a chain over them, then combining results.
chains.combine_documents.map_rerank.MapRerankDocumentsChain
Combining documents by mapping a chain over them, then reranking results.
chains.combine_documents.reduce.AsyncCombineDocsProtocol(...)
Interface for the combine_docs method.
chains.combine_documents.reduce.CombineDocsProtocol(...)
Interface for the combine_docs method.
chains.combine_documents.reduce.ReduceDocumentsChain
Combine documents by recursively reducing them.
chains.combine_documents.refine.RefineDocumentsChain
Combine documents by doing a first pass and then refining on more documents.
chains.combine_documents.stuff.StuffDocumentsChain
Chain that combines documents by stuffing into context.
chains.constitutional_ai.base.ConstitutionalChain
Chain for applying constitutional principles.
chains.constitutional_ai.models.ConstitutionalPrinciple
Class for a constitutional principle.
chains.conversation.base.ConversationChain
Chain to have a conversation and load context from memory.
chains.conversational_retrieval.base.BaseConversationalRetrievalChain
Chain for chatting with an index.
chains.conversational_retrieval.base.ChatVectorDBChain
Chain for chatting with a vector database.
chains.conversational_retrieval.base.ConversationalRetrievalChain
Chain for having a conversation based on retrieved documents.
chains.conversational_retrieval.base.InputType
Input type for ConversationalRetrievalChain.
chains.elasticsearch_database.base.ElasticsearchDatabaseChain
Chain for interacting with Elasticsearch Database.
chains.flare.base.FlareChain
Chain that combines a retriever, a question generator, and a response generator.
chains.flare.base.QuestionGeneratorChain
Chain that generates questions from uncertain spans.
chains.flare.prompts.FinishedOutputParser
Output parser that checks if the output is finished.
chains.graph_qa.arangodb.ArangoGraphQAChain
Chain for question-answering against a graph by generating AQL statements.
chains.graph_qa.base.GraphQAChain
Chain for question-answering against a graph.
chains.graph_qa.cypher.GraphCypherQAChain
Chain for question-answering against a graph by generating Cypher statements.
chains.graph_qa.cypher_utils.CypherQueryCorrector(schemas)
Used to correct relationship direction in generated Cypher statements.
chains.graph_qa.cypher_utils.Schema(...)
Create new instance of Schema(left_node, relation, right_node)
chains.graph_qa.falkordb.FalkorDBQAChain
Chain for question-answering against a graph by generating Cypher statements.
chains.graph_qa.hugegraph.HugeGraphQAChain
Chain for question-answering against a graph by generating gremlin statements.
chains.graph_qa.kuzu.KuzuQAChain
Question-answering against a graph by generating Cypher statements for Kùzu.
chains.graph_qa.nebulagraph.NebulaGraphQAChain
Chain for question-answering against a graph by generating nGQL statements.
chains.graph_qa.neptune_cypher.NeptuneOpenCypherQAChain
Chain for question-answering against a Neptune graph by generating openCypher statements.
chains.graph_qa.sparql.GraphSparqlQAChain
Question-answering against an RDF or OWL graph by generating SPARQL statements.
chains.hyde.base.HypotheticalDocumentEmbedder
Generate hypothetical document for query, and then embed that.
chains.llm.LLMChain
Chain to run queries against LLMs.
chains.llm_checker.base.LLMCheckerChain
Chain for question-answering with self-verification.
chains.llm_math.base.LLMMathChain
Chain that interprets a prompt and executes Python code to do math.
chains.llm_requests.LLMRequestsChain
Chain that requests a URL and then uses an LLM to parse results.
chains.llm_summarization_checker.base.LLMSummarizationCheckerChain
Chain for question-answering with self-verification.
chains.mapreduce.MapReduceChain
Map-reduce chain.
chains.moderation.OpenAIModerationChain
Pass input through a moderation endpoint.
chains.natbot.base.NatBotChain
Implement an LLM driven browser.
chains.natbot.crawler.Crawler()
A crawler for web pages.
chains.natbot.crawler.ElementInViewPort
A typed dictionary containing information about elements in the viewport.
chains.openai_functions.citation_fuzzy_match.FactWithEvidence
Class representing a single statement.
chains.openai_functions.citation_fuzzy_match.QuestionAnswer
A question and its answer as a list of facts each one should have a source.
chains.openai_functions.openapi.SimpleRequestChain
Chain for making a simple request to an API endpoint.
chains.openai_functions.qa_with_structure.AnswerWithSources
An answer to the question, with sources.
chains.prompt_selector.BasePromptSelector
Base class for prompt selectors.
chains.prompt_selector.ConditionalPromptSelector
Prompt collection that goes through conditionals.
chains.qa_generation.base.QAGenerationChain
Base class for question-answer generation chains.
chains.qa_with_sources.base.BaseQAWithSourcesChain
Question answering chain with sources over documents.
chains.qa_with_sources.base.QAWithSourcesChain
Question answering with sources over documents.
chains.qa_with_sources.loading.LoadingCallable(...)
Interface for loading the combine documents chain.
chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain
Question-answering with sources over an index.
chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain
Question-answering with sources over a vector database.
chains.query_constructor.base.StructuredQueryOutputParser
Output parser that parses a structured query.
chains.query_constructor.ir.Comparator(value)
Enumerator of the comparison operators.
chains.query_constructor.ir.Comparison
A comparison to a value.
chains.query_constructor.ir.Expr
Base class for all expressions.
chains.query_constructor.ir.FilterDirective
A filtering expression.
chains.query_constructor.ir.Operation
A logical operation over other directives.
chains.query_constructor.ir.Operator(value)
Enumerator of the operations.
chains.query_constructor.ir.StructuredQuery
A structured query.
chains.query_constructor.ir.Visitor()
Defines interface for IR translation using visitor pattern.
chains.query_constructor.parser.ISO8601Date
A date in ISO 8601 format (YYYY-MM-DD).
chains.query_constructor.schema.AttributeInfo
Information about a data source attribute.
chains.retrieval_qa.base.BaseRetrievalQA
Base class for question-answering chains.
chains.retrieval_qa.base.RetrievalQA
Chain for question-answering against an index.
chains.retrieval_qa.base.VectorDBQA
Chain for question-answering against a vector database.
chains.router.base.MultiRouteChain
Use a single chain to route an input to one of multiple candidate chains.
chains.router.base.Route(destination, ...)
Create new instance of Route(destination, next_inputs)
chains.router.base.RouterChain
Chain that outputs the name of a destination chain and the inputs to it.
chains.router.embedding_router.EmbeddingRouterChain
Chain that uses embeddings to route between options.
chains.router.llm_router.LLMRouterChain
A router chain that uses an LLM chain to perform routing.
chains.router.llm_router.RouterOutputParser
Parser for output of router chain in the multi-prompt chain.
chains.router.multi_prompt.MultiPromptChain
A multi-route chain that uses an LLM router chain to choose amongst prompts.
chains.router.multi_retrieval_qa.MultiRetrievalQAChain
A multi-route chain that uses an LLM router chain to choose amongst retrieval qa chains.
chains.sequential.SequentialChain
Chain where the outputs of one chain feed directly into next.
chains.sequential.SimpleSequentialChain
Simple chain where the outputs of one step feed directly into next.
chains.sql_database.query.SQLInput
Input for a SQL Chain.
chains.sql_database.query.SQLInputWithTables
Input for a SQL Chain.
chains.transform.TransformChain
Chain that transforms the chain output.
Functions¶
chains.combine_documents.reduce.acollapse_docs(...)
Execute a collapse function on a set of documents and merge their metadatas.
chains.combine_documents.reduce.collapse_docs(...)
Execute a collapse function on a set of documents and merge their metadatas.
chains.combine_documents.reduce.split_list_of_docs(...)
Split Documents into subsets that each meet a cumulative length constraint.
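The greedy partitioning this describes can be sketched as follows; the signature is an assumption modeled on the entry's description:

```python
def split_list_of_docs(docs, length_func, token_max):
    """Greedily partition docs into consecutive sublists whose cumulative
    length (as measured by length_func) stays within token_max."""
    sublists, current, current_len = [], [], 0
    for doc in docs:
        doc_len = length_func([doc])
        if current and current_len + doc_len > token_max:
            sublists.append(current)      # flush the full sublist
            current, current_len = [], 0
        current.append(doc)
        current_len += doc_len
    if current:
        sublists.append(current)
    return sublists
```

This is the pre-step a map-reduce combine chain needs so that each collapse call fits within the model's context window.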
chains.ernie_functions.base.convert_python_function_to_ernie_function(...)
Convert a Python function to an Ernie function-calling API compatible dict.
chains.ernie_functions.base.convert_to_ernie_function(...)
Convert a raw function/class to an Ernie function.
chains.ernie_functions.base.create_ernie_fn_chain(...)
[Legacy] Create an LLM chain that uses Ernie functions.
chains.ernie_functions.base.create_ernie_fn_runnable(...)
Create a runnable sequence that uses Ernie functions.
chains.ernie_functions.base.create_structured_output_chain(...)
[Legacy] Create an LLMChain that uses an Ernie function to get a structured output.
chains.ernie_functions.base.create_structured_output_runnable(...)
Create a runnable that uses an Ernie function to get a structured output.
chains.ernie_functions.base.get_ernie_output_parser(...)
Get the appropriate function output parser given the user functions.
chains.example_generator.generate_example(...)
Return another example given a list of examples for a prompt.
chains.graph_qa.cypher.construct_schema(...)
Filter the schema based on included or excluded types.
chains.graph_qa.cypher.extract_cypher(text)
Extract Cypher code from a text.
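Extracting generated query code from an LLM reply typically means pulling the first fenced code block out of the text. A hedged sketch (the regex and fallback behavior are assumptions):

```python
import re


def extract_cypher(text: str) -> str:
    """Pull the first fenced code block out of an LLM reply, falling back
    to the raw text when no fence is present."""
    match = re.search(r"```(?:cypher)?\s*(.*?)```", text, re.DOTALL)
    return match.group(1).strip() if match else text.strip()
```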
chains.graph_qa.falkordb.extract_cypher(text)
Extract Cypher code from a text.
chains.graph_qa.neptune_cypher.extract_cypher(text)
Extract Cypher code from text using Regex.
chains.graph_qa.neptune_cypher.trim_query(query)
Trim the query to only include Cypher keywords.
chains.graph_qa.neptune_cypher.use_simple_prompt(llm)
Decides whether to use the simple prompt.
chains.loading.load_chain(path, **kwargs)
Unified method for loading a chain from LangChainHub or local fs.
chains.loading.load_chain_from_config(...)
Load chain from Config Dict.
chains.openai_functions.base.convert_python_function_to_openai_function(...)
Convert a Python function to an OpenAI function-calling API compatible dict.
chains.openai_functions.base.convert_to_openai_function(...)
Convert a raw function/class to an OpenAI function.
chains.openai_functions.base.create_openai_fn_chain(...)
[Legacy] Create an LLM chain that uses OpenAI functions.
chains.openai_functions.base.create_openai_fn_runnable(...)
Create a runnable sequence that uses OpenAI functions.
chains.openai_functions.base.create_structured_output_chain(...)
[Legacy] Create an LLMChain that uses an OpenAI function to get a structured output.
chains.openai_functions.base.create_structured_output_runnable(...)
Create a runnable that uses an OpenAI function to get a structured output.
chains.openai_functions.base.get_openai_output_parser(...)
Get the appropriate function output parser given the user functions.
chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain(llm)
Create a citation fuzzy match chain.
chains.openai_functions.extraction.create_extraction_chain(...)
Creates a chain that extracts information from a passage.
chains.openai_functions.extraction.create_extraction_chain_pydantic(...)
Creates a chain that extracts information from a passage using pydantic schema.
chains.openai_functions.openapi.get_openapi_chain(spec)
Create a chain for querying an API from a OpenAPI spec.
chains.openai_functions.openapi.openapi_spec_to_openai_fn(spec)
Convert a valid OpenAPI spec to the JSON Schema format expected for OpenAI functions.
chains.openai_functions.qa_with_structure.create_qa_with_sources_chain(llm)
Create a question answering chain that returns an answer with sources.
chains.openai_functions.qa_with_structure.create_qa_with_structure_chain(...)
Create a question answering chain that returns an answer with sources.
chains.openai_functions.tagging.create_tagging_chain(...)
Creates a chain that extracts information from a passage.
chains.openai_functions.tagging.create_tagging_chain_pydantic(...)
Creates a chain that extracts information from a passage.
chains.openai_functions.utils.get_llm_kwargs(...)
Returns the kwargs for the LLMChain constructor.
chains.openai_tools.extraction.create_extraction_chain_pydantic(...)
Creates a chain that extracts information from a passage.
chains.prompt_selector.is_chat_model(llm)
Check if the language model is a chat model.
chains.prompt_selector.is_llm(llm)
Check if the language model is an LLM.
chains.qa_with_sources.loading.load_qa_with_sources_chain(llm)
Load a question answering with sources chain.
chains.query_constructor.base.construct_examples(...)
Construct examples from input-output pairs.
chains.query_constructor.base.fix_filter_directive(...)
Fix invalid filter directive.
chains.query_constructor.base.get_query_constructor_prompt(...)
Create query construction prompt.
chains.query_constructor.base.load_query_constructor_chain(...)
Load a query constructor chain.
chains.query_constructor.base.load_query_constructor_runnable(...)
Load a query constructor runnable chain.
chains.query_constructor.parser.get_parser([...])
Returns a parser for the query language.
chains.query_constructor.parser.v_args(...)
Dummy decorator for when lark is not installed.
chains.sql_database.query.create_sql_query_chain(llm, db)
Create a chain that generates SQL queries.
langchain.embeddings¶
Embedding models are wrappers around embedding models
from different APIs and services.
Embedding models can be LLMs or not.
Class hierarchy:
Embeddings --> <name>Embeddings # Examples: OpenAIEmbeddings, HuggingFaceEmbeddings
Classes¶
embeddings.cache.CacheBackedEmbeddings(...)
Interface for caching results from embedding models.
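The caching idea can be sketched with a plain wrapper: each text is embedded at most once, with repeat lookups served from a store. The class and method names mirror the entry above, but the implementation is an illustrative assumption:

```python
class CacheBackedEmbeddings:
    """Wrap an embedding function so each distinct text is embedded at most
    once; repeat lookups are served from the backing store (a dict here)."""

    def __init__(self, embed_fn, store=None):
        self.embed_fn = embed_fn
        self.store = store if store is not None else {}

    def embed_documents(self, texts):
        for text in texts:
            if text not in self.store:
                self.store[text] = self.embed_fn(text)
        return [self.store[t] for t in texts]
```

In practice the store would be keyed by a hash of the text and persisted, so repeated indexing runs skip already-paid embedding calls.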
Functions¶
langchain.evaluation¶
Evaluation chains for grading LLM and Chain outputs.
This module contains off-the-shelf evaluation chains for grading the output of
LangChain primitives such as language models and chains.
Loading an evaluator
To load an evaluator, you can use the load_evaluators or load_evaluator functions with the names of the evaluators to load.
from langchain.evaluation import load_evaluator
evaluator = load_evaluator("qa")
evaluator.evaluate_strings(
    prediction="We sold more than 40,000 units last week",
    input="How many units did we sell last week?",
    reference="We sold 32,378 units",
)
The evaluator must be one of EvaluatorType.
Datasets
To load one of the LangChain HuggingFace datasets, you can use the load_dataset function with the name of the dataset to load.
from langchain.evaluation import load_dataset
ds = load_dataset("llm-math")
Some common use cases for evaluation include:
Grading the accuracy of a response against ground truth answers: QAEvalChain
Comparing the output of two models: PairwiseStringEvalChain or LabeledPairwiseStringEvalChain when there is additionally a reference label.
Judging the efficacy of an agent’s tool usage: TrajectoryEvalChain
Checking whether an output complies with a set of criteria: CriteriaEvalChain or LabeledCriteriaEvalChain when there is additionally a reference label.
Computing semantic difference between a prediction and reference: EmbeddingDistanceEvalChain or between two predictions: PairwiseEmbeddingDistanceEvalChain
Measuring the string distance between a prediction and reference StringDistanceEvalChain or between two predictions PairwiseStringDistanceEvalChain
Low-level API
These evaluators implement one of the following interfaces:
StringEvaluator: Evaluate a prediction string against a reference label and/or input context.
PairwiseStringEvaluator: Evaluate two prediction strings against each other. Useful for scoring preferences, measuring similarity between two chain or llm agents, or comparing outputs on similar inputs.
AgentTrajectoryEvaluator: Evaluate the full sequence of actions taken by an agent.
These interfaces enable easier composability and usage within a higher level evaluation framework.
Classes¶
evaluation.agents.trajectory_eval_chain.TrajectoryEval
A named tuple containing the score and reasoning for a trajectory.
evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain
A chain for evaluating ReAct style agents.
evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser
Trajectory output parser.
evaluation.comparison.eval_chain.LabeledPairwiseStringEvalChain
A chain for comparing two outputs, such as the outputs
evaluation.comparison.eval_chain.PairwiseStringEvalChain
A chain for comparing two outputs, such as the outputs
evaluation.comparison.eval_chain.PairwiseStringResultOutputParser
A parser for the output of the PairwiseStringEvalChain.
evaluation.criteria.eval_chain.Criteria(value)
A Criteria to evaluate.
evaluation.criteria.eval_chain.CriteriaEvalChain
LLM Chain for evaluating runs against criteria.
evaluation.criteria.eval_chain.CriteriaResultOutputParser
A parser for the output of the CriteriaEvalChain.
evaluation.criteria.eval_chain.LabeledCriteriaEvalChain
Criteria evaluation chain that requires references.
evaluation.embedding_distance.base.EmbeddingDistance(value)
Embedding Distance Metric.
evaluation.embedding_distance.base.EmbeddingDistanceEvalChain
Use embedding distances to score semantic difference between a prediction and reference.
evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain
Use embedding distances to score semantic difference between two predictions.
evaluation.exact_match.base.ExactMatchStringEvaluator(*)
Compute an exact match between the prediction and the reference.
evaluation.parsing.base.JsonEqualityEvaluator([...])
Evaluates whether the prediction is equal to the reference after
evaluation.parsing.base.JsonValidityEvaluator(...)
Evaluates whether the prediction is valid JSON.
evaluation.parsing.json_distance.JsonEditDistanceEvaluator([...])
An evaluator that calculates the edit distance between JSON strings.
evaluation.parsing.json_schema.JsonSchemaEvaluator(...)
An evaluator that validates a JSON prediction against a JSON schema reference.
evaluation.qa.eval_chain.ContextQAEvalChain
LLM Chain for evaluating QA without ground truth, based on context.
evaluation.qa.eval_chain.CotQAEvalChain
LLM Chain for evaluating QA using chain of thought reasoning.
evaluation.qa.eval_chain.QAEvalChain
LLM Chain for evaluating question answering.
evaluation.qa.generate_chain.QAGenerateChain
LLM Chain for generating examples for question answering.
evaluation.regex_match.base.RegexMatchStringEvaluator(*)
Compute a regex match between the prediction and the reference.
evaluation.schema.AgentTrajectoryEvaluator()
Interface for evaluating agent trajectories.
evaluation.schema.EvaluatorType(value[, ...])
The types of the evaluators.
evaluation.schema.LLMEvalChain
A base class for evaluators that use an LLM.
evaluation.schema.PairwiseStringEvaluator()
Compare the output of two models (or two outputs of the same model).
evaluation.schema.StringEvaluator()
Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.
evaluation.scoring.eval_chain.LabeledScoreStringEvalChain
A chain for scoring the output of a model on a scale of 1-10.
evaluation.scoring.eval_chain.ScoreStringEvalChain
A chain for scoring on a scale of 1-10 the output of a model.
evaluation.scoring.eval_chain.ScoreStringResultOutputParser
A parser for the output of the ScoreStringEvalChain.
evaluation.string_distance.base.PairwiseStringDistanceEvalChain
Compute string edit distances between two predictions.
evaluation.string_distance.base.StringDistance(value)
Distance metric to use.
evaluation.string_distance.base.StringDistanceEvalChain
Compute string distances between the prediction and the reference.
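The metric behind these string-distance evaluators can be illustrated with a plain Levenshtein implementation. This is a sketch of the idea, not the library's implementation, and the normalization shown is an assumption:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def distance_score(prediction: str, reference: str) -> float:
    """Normalize to [0, 1]; 0.0 means the strings are identical."""
    if not prediction and not reference:
        return 0.0
    return levenshtein(prediction, reference) / max(len(prediction), len(reference))

print(levenshtein("kitten", "sitting"))  # 3
```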
Functions¶
evaluation.comparison.eval_chain.resolve_pairwise_criteria(...)
Resolve the criteria for the pairwise evaluator.
evaluation.criteria.eval_chain.resolve_criteria(...)
Resolve the criteria to evaluate.
evaluation.loading.load_dataset(uri)
Load a dataset from the LangChainDatasets on HuggingFace.
evaluation.loading.load_evaluator(evaluator, *)
Load the requested evaluation chain specified by a string.
evaluation.loading.load_evaluators(evaluators, *)
Load evaluators specified by a list of evaluator types.
evaluation.scoring.eval_chain.resolve_criteria(...)
Resolve the criteria for the pairwise evaluator.
langchain.hub¶
Interface with the LangChain Hub.
Functions¶
hub.pull(owner_repo_commit, *[, api_url, ...])
Pulls an object from the hub and returns it as a LangChain object.
hub.push(repo_full_name, object, *[, ...])
Pushes an object to the hub and returns the URL it can be viewed at in a browser.
langchain.indexes¶
Code to support various indexing workflows.
Provides code to:
Create knowledge graphs from data.
Support indexing workflows from LangChain data loaders to vectorstores.
For indexing workflows, this code is used to avoid writing duplicated content
into the vectorstore and to avoid over-writing content if it's unchanged.
Importantly, this keeps working even if the content being written is derived
via a set of transformations from some source content (e.g., indexing child
documents that were derived from parent documents by chunking).
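The dedup idea described above can be sketched with content hashing; the function, store, and hash choice below are illustrative, not the actual RecordManager interface:

```python
import hashlib

def index_documents(docs, seen_hashes, vectorstore):
    """Write only documents whose content hash is new (illustrative
    sketch; not the actual RecordManager interface)."""
    written = 0
    for doc in docs:
        digest = hashlib.sha256(doc.encode()).hexdigest()
        if digest in seen_hashes:
            continue  # unchanged content: skip the duplicate write
        seen_hashes.add(digest)
        vectorstore.append(doc)
        written += 1
    return written

store, hashes = [], set()
index_documents(["a", "b"], hashes, store)
index_documents(["a", "c"], hashes, store)  # "a" is unchanged, only "c" is written
print(store)  # ['a', 'b', 'c']
```

Because the hash is taken over the transformed content, re-running the same transformation over unchanged sources produces the same digests and no redundant writes.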
Classes¶
indexes.base.RecordManager(namespace)
An abstract base class representing the interface for a record manager.
indexes.graph.GraphIndexCreator
Functionality to create graph index.
indexes.vectorstore.VectorStoreIndexWrapper
Wrapper around a vectorstore for easy access.
indexes.vectorstore.VectorstoreIndexCreator
Logic for creating indexes.
Functions¶
langchain.memory¶
Memory maintains Chain state, incorporating context from past runs.
Class hierarchy for Memory:
BaseMemory --> BaseChatMemory --> <name>Memory # Examples: ZepMemory, MotorheadMemory
Main helpers:
BaseChatMessageHistory
Chat Message History stores the chat message history in different stores.
Class hierarchy for ChatMessageHistory:
BaseChatMessageHistory --> <name>ChatMessageHistory # Example: ZepChatMessageHistory
Main helpers:
AIMessage, BaseMessage, HumanMessage
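The buffer-style memories listed below share a simple shape: store conversation turns, then render them back as a history string. A minimal sketch (hypothetical class, not ConversationBufferMemory itself):

```python
from typing import List, Optional, Tuple

class MiniBufferMemory:
    """Hypothetical sketch of buffer-style conversation memory;
    not the ConversationBufferMemory implementation."""

    def __init__(self, k: Optional[int] = None):
        self.k = k  # optional window size, as in the windowed variant
        self.turns: List[Tuple[str, str]] = []

    def save_context(self, inputs: dict, outputs: dict) -> None:
        self.turns.append((inputs["input"], outputs["output"]))

    def load_memory_variables(self, _: dict) -> dict:
        turns = self.turns if self.k is None else self.turns[-self.k:]
        history = "\n".join(f"Human: {h}\nAI: {a}" for h, a in turns)
        return {"history": history}

memory = MiniBufferMemory(k=1)
memory.save_context({"input": "hi"}, {"output": "hello"})
memory.save_context({"input": "bye"}, {"output": "goodbye"})
print(memory.load_memory_variables({}))  # only the last turn survives the window
```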
Classes¶
memory.buffer.ConversationBufferMemory
Buffer for storing conversation memory.
memory.buffer.ConversationStringBufferMemory
Buffer for storing conversation memory.
memory.buffer_window.ConversationBufferWindowMemory
Buffer for storing conversation memory inside a limited size window.
memory.chat_memory.BaseChatMemory
Abstract base class for chat memory.
memory.combined.CombinedMemory
Combines the data of multiple memories.
memory.entity.BaseEntityStore
Abstract base class for Entity store.
memory.entity.ConversationEntityMemory
Entity extractor & summarizer memory.
memory.entity.InMemoryEntityStore
In-memory Entity store.
memory.entity.RedisEntityStore
Redis-backed Entity store.
memory.entity.SQLiteEntityStore
SQLite-backed Entity store
memory.entity.UpstashRedisEntityStore
Upstash Redis backed Entity store.
memory.kg.ConversationKGMemory
Knowledge graph conversation memory.
memory.motorhead_memory.MotorheadMemory
Chat message memory backed by Motorhead service.
memory.readonly.ReadOnlySharedMemory
A memory wrapper that is read-only and cannot be changed.
memory.simple.SimpleMemory
Simple memory for storing context or other information that shouldn't ever change between prompts.
memory.summary.ConversationSummaryMemory
Conversation summarizer to chat memory.
memory.summary.SummarizerMixin
Mixin for summarizer.
memory.summary_buffer.ConversationSummaryBufferMemory
Buffer with summarizer for storing conversation memory.
memory.token_buffer.ConversationTokenBufferMemory
Conversation chat memory with token limit.
memory.vectorstore.VectorStoreRetrieverMemory
VectorStoreRetriever-backed memory.
memory.zep_memory.ZepMemory
Persist your chain history to the Zep MemoryStore.
Functions¶
memory.utils.get_prompt_input_key(inputs, ...)
Get the prompt input key.
langchain.model_laboratory¶
Experiment with different models.
Classes¶
model_laboratory.ModelLaboratory(chains[, names])
Experiment with different models.
langchain.output_parsers¶
OutputParser classes parse the output of an LLM call.
Class hierarchy:
BaseLLMOutputParser --> BaseOutputParser --> <name>OutputParser # ListOutputParser, PydanticOutputParser
Main helpers:
Serializable, Generation, PromptValue
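The parser pattern is small: a class with a parse method that turns raw LLM text into a structured value. A minimal boolean parser, sketched without the library (the YES/NO convention here is an assumption):

```python
class BooleanParser:
    """Minimal sketch of the boolean parser idea (map an LLM's YES/NO
    reply to a bool); not the library's BooleanOutputParser."""

    true_val = "YES"
    false_val = "NO"

    def parse(self, text: str) -> bool:
        cleaned = text.strip().upper()
        if cleaned == self.true_val:
            return True
        if cleaned == self.false_val:
            return False
        raise ValueError(f"Expected {self.true_val} or {self.false_val}, got {text!r}")

print(BooleanParser().parse(" yes \n"))  # True
```

Raising on unexpected text is what lets wrappers like the fixing and retry parsers catch the error and re-prompt the model.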
Classes¶
output_parsers.boolean.BooleanOutputParser
Parse the output of an LLM call to a boolean.
output_parsers.combining.CombiningOutputParser
Combine multiple output parsers into one.
output_parsers.datetime.DatetimeOutputParser
Parse the output of an LLM call to a datetime.
output_parsers.enum.EnumOutputParser
Parse an output that is one of a set of values.
output_parsers.ernie_functions.JsonKeyOutputFunctionsParser
Parse an output as the element of the Json object.
output_parsers.ernie_functions.JsonOutputFunctionsParser
Parse an output as the Json object.
output_parsers.ernie_functions.OutputFunctionsParser
Parse an output that is one of sets of values.
output_parsers.ernie_functions.PydanticAttrOutputFunctionsParser
Parse an output as an attribute of a pydantic object.
output_parsers.ernie_functions.PydanticOutputFunctionsParser
Parse an output as a pydantic object.
output_parsers.fix.OutputFixingParser
Wraps a parser and tries to fix parsing errors.
output_parsers.json.SimpleJsonOutputParser
Parse the output of an LLM call to a JSON object.
output_parsers.openai_functions.JsonKeyOutputFunctionsParser
Parse an output as the element of the Json object.
output_parsers.openai_functions.JsonOutputFunctionsParser
Parse an output as the Json object.
output_parsers.openai_functions.OutputFunctionsParser
Parse an output that is one of sets of values.
output_parsers.openai_functions.PydanticAttrOutputFunctionsParser
Parse an output as an attribute of a pydantic object.
output_parsers.openai_functions.PydanticOutputFunctionsParser
Parse an output as a pydantic object.
output_parsers.openai_tools.JsonOutputKeyToolsParser
Parse tools from OpenAI response.
output_parsers.openai_tools.JsonOutputToolsParser
Parse tools from OpenAI response.
output_parsers.openai_tools.PydanticToolsParser
Parse tools from OpenAI response.
output_parsers.pandas_dataframe.PandasDataFrameOutputParser
Parse an output using Pandas DataFrame format.
output_parsers.pydantic.PydanticOutputParser
Parse an output using a pydantic model.
output_parsers.rail_parser.GuardrailsOutputParser
Parse the output of an LLM call using Guardrails.
output_parsers.regex.RegexParser
Parse the output of an LLM call using a regex.
output_parsers.regex_dict.RegexDictParser
Parse the output of an LLM call into a Dictionary using a regex.
output_parsers.retry.RetryOutputParser
Wraps a parser and tries to fix parsing errors.
output_parsers.retry.RetryWithErrorOutputParser
Wraps a parser and tries to fix parsing errors.
output_parsers.structured.ResponseSchema
A schema for a response from a structured output parser.
output_parsers.structured.StructuredOutputParser
Parse the output of an LLM call to a structured output.
output_parsers.xml.XMLOutputParser
Parse an output using xml format.
output_parsers.yaml.YamlOutputParser
Parse YAML output using a pydantic model.
Functions¶
output_parsers.json.parse_and_check_json_markdown(...)
Parse a JSON string from a Markdown string and check that it contains the expected keys.
output_parsers.json.parse_json_markdown(...)
Parse a JSON string from a Markdown string.
output_parsers.json.parse_partial_json(s, *)
Parse a JSON string that may be missing closing braces.
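The idea behind parse_partial_json can be sketched by tracking unclosed brackets and appending the missing closers before calling json.loads. This naive version ignores brackets inside string values, unlike a robust implementation:

```python
import json

def parse_partial_json_sketch(s: str):
    """Naive repair of JSON missing closing brackets (illustrative;
    not the library's parse_partial_json, and it ignores brackets
    that appear inside string values)."""
    closers = []
    for ch in s:
        if ch == "{":
            closers.append("}")
        elif ch == "[":
            closers.append("]")
        elif ch in "}]" and closers:
            closers.pop()
    return json.loads(s + "".join(reversed(closers)))

print(parse_partial_json_sketch('{"a": [1, 2'))  # {'a': [1, 2]}
```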
output_parsers.loading.load_output_parser(config)
Load an output parser.
langchain.prompts¶
A prompt is the input to the model. Prompts are often constructed
from multiple components. Prompt classes and functions make constructing
and working with prompts easy.
Class hierarchy:
BasePromptTemplate --> PipelinePromptTemplate
StringPromptTemplate --> PromptTemplate
FewShotPromptTemplate
FewShotPromptWithTemplates
BaseChatPromptTemplate --> AutoGPTPrompt
ChatPromptTemplate --> AgentScratchPadChatPromptTemplate
BaseMessagePromptTemplate --> MessagesPlaceholder
BaseStringMessagePromptTemplate --> ChatMessagePromptTemplate
HumanMessagePromptTemplate
AIMessagePromptTemplate
SystemMessagePromptTemplate
PromptValue --> StringPromptValue
ChatPromptValue
Classes¶
prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector
Select and order examples based on ngram overlap score (sentence_bleu score).
Functions¶
prompts.example_selector.ngram_overlap.ngram_overlap_score(...)
Compute ngram overlap score of source and example as sentence_bleu score.
langchain.retrievers¶
The Retriever class returns Documents given a text query.
It is more general than a vector store. A retriever does not need to be able to
store documents, only to return (or retrieve) them. Vector stores can be used as
the backbone of a retriever, but there are other types of retrievers as well.
Class hierarchy:
BaseRetriever --> <name>Retriever # Examples: ArxivRetriever, MergerRetriever
Main helpers:
Document, Serializable, Callbacks,
CallbackManagerForRetrieverRun, AsyncCallbackManagerForRetrieverRun
Classes¶
retrievers.contextual_compression.ContextualCompressionRetriever
Retriever that wraps a base retriever and compresses the results.
retrievers.document_compressors.base.BaseDocumentCompressor
Base class for document compressors.
retrievers.document_compressors.base.DocumentCompressorPipeline
Document compressor that uses a pipeline of Transformers.
retrievers.document_compressors.chain_extract.LLMChainExtractor
Document compressor that uses an LLM chain to extract the relevant parts of documents.
retrievers.document_compressors.chain_extract.NoOutputParser
Parse outputs that could return a null string of some sort.
retrievers.document_compressors.chain_filter.LLMChainFilter
Filter that drops documents that aren't relevant to the query.
retrievers.document_compressors.cohere_rerank.CohereRerank
Document compressor that uses Cohere Rerank API.
retrievers.document_compressors.embeddings_filter.EmbeddingsFilter
Document compressor that uses embeddings to drop documents unrelated to the query.
retrievers.ensemble.EnsembleRetriever
Retriever that ensembles the multiple retrievers.
retrievers.merger_retriever.MergerRetriever
Retriever that merges the results of multiple retrievers.
retrievers.multi_query.LineList
List of lines.
retrievers.multi_query.LineListOutputParser
Output parser for a list of lines.
retrievers.multi_query.MultiQueryRetriever
Given a query, use an LLM to write a set of queries.
retrievers.multi_vector.MultiVectorRetriever
Retrieve from a set of multiple embeddings for the same document.
retrievers.multi_vector.SearchType(value[, ...])
Enumerator of the types of search to perform.
retrievers.parent_document_retriever.ParentDocumentRetriever
Retrieve small chunks then retrieve their parent documents.
retrievers.re_phraser.RePhraseQueryRetriever
Given a query, use an LLM to re-phrase it.
retrievers.self_query.base.SelfQueryRetriever
Retriever that uses a vector store and an LLM to generate the vector store queries.
retrievers.self_query.chroma.ChromaTranslator()
Translate Chroma internal query language elements to valid filters.
retrievers.self_query.dashvector.DashvectorTranslator()
Logic for converting internal query language elements to valid filters.
retrievers.self_query.deeplake.DeepLakeTranslator()
Translate DeepLake internal query language elements to valid filters.
retrievers.self_query.elasticsearch.ElasticsearchTranslator()
Translate Elasticsearch internal query language elements to valid filters.
retrievers.self_query.milvus.MilvusTranslator()
Translate Milvus internal query language elements to valid filters.
retrievers.self_query.mongodb_atlas.MongoDBAtlasTranslator()
Translate Mongo internal query language elements to valid filters.
retrievers.self_query.myscale.MyScaleTranslator([...])
Translate MyScale internal query language elements to valid filters.
retrievers.self_query.opensearch.OpenSearchTranslator()
Translate OpenSearch internal query domain-specific language elements to valid filters.
retrievers.self_query.pinecone.PineconeTranslator()
Translate Pinecone internal query language elements to valid filters.
retrievers.self_query.qdrant.QdrantTranslator(...)
Translate Qdrant internal query language elements to valid filters.
retrievers.self_query.redis.RedisTranslator(schema)
Visitor for translating structured queries to Redis filter expressions.
retrievers.self_query.supabase.SupabaseVectorTranslator()
Translate Langchain filters to Supabase PostgREST filters.
retrievers.self_query.timescalevector.TimescaleVectorTranslator()
Translate the internal query language elements to valid filters.
retrievers.self_query.vectara.VectaraTranslator()
Translate Vectara internal query language elements to valid filters.
retrievers.self_query.weaviate.WeaviateTranslator()
Translate Weaviate internal query language elements to valid filters.
retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever
Retriever that combines embedding similarity with recency in retrieving values.
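The recency weighting can be sketched as a similarity score plus a decaying freshness term; the formula and parameter names below are illustrative, not a guaranteed match for the class's internals:

```python
def time_weighted_score(similarity: float, hours_passed: float,
                        decay_rate: float = 0.01) -> float:
    """Similarity plus a recency term that decays with elapsed time
    (illustrative sketch; parameter names are assumptions)."""
    return similarity + (1.0 - decay_rate) ** hours_passed

fresh = time_weighted_score(0.5, hours_passed=1)
stale = time_weighted_score(0.5, hours_passed=100)
print(fresh > stale)  # True: recent documents outrank equally similar old ones
```

A decay_rate near 0 makes the retriever behave like plain similarity search; a rate near 1 makes it strongly prefer the newest documents.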
retrievers.web_research.LineList
List of questions.
retrievers.web_research.QuestionListOutputParser
Output parser for a list of numbered questions.
retrievers.web_research.SearchQueries
Search queries to research for the user's goal.
retrievers.web_research.WebResearchRetriever
Google Search API retriever.
Functions¶
retrievers.document_compressors.chain_extract.default_get_input(...)
Return the compression chain input.
retrievers.document_compressors.chain_filter.default_get_input(...)
Return the compression chain input.
retrievers.self_query.deeplake.can_cast_to_float(string)
Check if a string can be cast to a float.
retrievers.self_query.milvus.process_value(value)
Convert a value to a string and add double quotes if it is a string.
retrievers.self_query.vectara.process_value(value)
Convert a value to a string and add single quotes if it is a string.
langchain.runnables¶
Classes¶
runnables.hub.HubRunnable
An instance of a runnable stored in the LangChain Hub.
runnables.openai_functions.OpenAIFunction
A function description for ChatOpenAI
runnables.openai_functions.OpenAIFunctionsRouter
A runnable that routes to the selected function.
langchain.smith¶
LangSmith utilities.
This module provides utilities for connecting to LangSmith. For more information on LangSmith, see the LangSmith documentation.
Evaluation
LangSmith helps you evaluate Chains and other language model application components using a number of LangChain evaluators.
An example of this is shown below, assuming you’ve created a LangSmith dataset called <my_dataset_name>:
from langsmith import Client
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.smith import RunEvalConfig, run_on_dataset
# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
    llm = ChatOpenAI(temperature=0)
    chain = LLMChain.from_string(
        llm,
        "What's the answer to {your_input_key}"
    )
    return chain
# Load off-the-shelf evaluators via config or the EvaluatorType (string or enum)
evaluation_config = RunEvalConfig(
    evaluators=[
        "qa",  # "Correctness" against a reference answer
        "embedding_distance",
        RunEvalConfig.Criteria("helpfulness"),
        RunEvalConfig.Criteria({
            "fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?"
        }),
    ]
)
client = Client()
run_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)
You can also create custom evaluators by subclassing the
StringEvaluator
or LangSmith’s RunEvaluator classes.
from typing import Optional
from langchain.evaluation import StringEvaluator

class MyStringEvaluator(StringEvaluator):

    @property
    def requires_input(self) -> bool:
        return False

    @property
    def requires_reference(self) -> bool:
        return True

    @property
    def evaluation_name(self) -> str:
        return "exact_match"

    def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
        return {"score": prediction == reference}

evaluation_config = RunEvalConfig(
    custom_evaluators=[MyStringEvaluator()],
)
run_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)
Primary Functions
arun_on_dataset: Asynchronous function to evaluate a chain, agent, or other LangChain component over a dataset.
run_on_dataset: Function to evaluate a chain, agent, or other LangChain component over a dataset.
RunEvalConfig: Class representing the configuration for running evaluation. You can select evaluators by EvaluatorType or config, or you can pass in custom_evaluators
Classes¶
smith.evaluation.config.EvalConfig
Configuration for a given run evaluator.
smith.evaluation.config.RunEvalConfig
Configuration for a run evaluation.
smith.evaluation.config.SingleKeyEvalConfig
Configuration for a run evaluator that only requires a single key.
smith.evaluation.progress.ProgressBarCallback(total)
A simple progress bar for the console.
smith.evaluation.runner_utils.EvalError(...)
Your architecture raised an error.
smith.evaluation.runner_utils.InputFormatError
Raised when the input format is invalid.
smith.evaluation.runner_utils.TestResult
A dictionary of the results of a single test run.
smith.evaluation.string_run_evaluator.ChainStringRunMapper
Extract items to evaluate from the run object from a chain.
smith.evaluation.string_run_evaluator.LLMStringRunMapper
Extract items to evaluate from the run object.
smith.evaluation.string_run_evaluator.StringExampleMapper
Map an example, or row in the dataset, to the inputs of an evaluation.
smith.evaluation.string_run_evaluator.StringRunEvaluatorChain
Evaluate Run and optional examples.
smith.evaluation.string_run_evaluator.StringRunMapper
Extract items to evaluate from the run object.
smith.evaluation.string_run_evaluator.ToolStringRunMapper
Map an input to the tool.
Functions¶
smith.evaluation.name_generation.random_name()
Generate a random name.
smith.evaluation.runner_utils.arun_on_dataset(...)
Run the Chain or language model on a dataset and store traces to the specified project name.
smith.evaluation.runner_utils.run_on_dataset(...)
Run the Chain or language model on a dataset and store traces to the specified project name.
langchain.storage¶
Implementations of key-value stores and storage helpers.
This module provides implementations of various key-value stores that conform
to a simple key-value interface.
The primary goal of these stores is to support caching.
Classes¶
storage.encoder_backed.EncoderBackedStore(...)
Wraps a store with key and value encoders/decoders.
storage.file_system.LocalFileStore(root_path)
BaseStore interface that works on the local file system.
storage.in_memory.InMemoryBaseStore()
In-memory implementation of the BaseStore using a dictionary.
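The encoder-backed pattern wraps a raw store with key and value codecs; a minimal sketch (hypothetical class, not the library's EncoderBackedStore):

```python
import json
from typing import Dict

class EncoderBackedDictStore:
    """Hypothetical sketch of wrapping a byte store with key and value
    codecs; not the library's EncoderBackedStore."""

    def __init__(self, key_encoder, value_serializer, value_deserializer):
        self._store: Dict[str, bytes] = {}
        self._encode_key = key_encoder
        self._dump = value_serializer
        self._load = value_deserializer

    def mset(self, pairs):
        for key, value in pairs:
            self._store[self._encode_key(key)] = self._dump(value)

    def mget(self, keys):
        return [self._load(self._store[self._encode_key(k)]) for k in keys]

store = EncoderBackedDictStore(
    key_encoder=str,
    value_serializer=lambda v: json.dumps(v).encode(),
    value_deserializer=lambda b: json.loads(b.decode()),
)
store.mset([(1, {"x": 1}), (2, {"x": 2})])
print(store.mget([2]))  # [{'x': 2}]
```

Keeping the codecs outside the store is what lets the same byte-oriented backend (file system, in-memory dict) hold arbitrary Python values.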
langchain.text_splitter¶
Text Splitters are classes for splitting text.
Class hierarchy:
BaseDocumentTransformer --> TextSplitter --> <name>TextSplitter # Example: CharacterTextSplitter
RecursiveCharacterTextSplitter --> <name>TextSplitter
Note: MarkdownHeaderTextSplitter and HTMLHeaderTextSplitter do not derive from TextSplitter.
Main helpers:
Document, Tokenizer, Language, LineType, HeaderType
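The recursive splitting strategy tries coarse separators first and falls back to finer ones for oversized pieces. A simplified sketch (no chunk overlap or merging of small pieces, unlike the library class):

```python
def recursive_split(text, chunk_size, separators=("\n\n", "\n", " ", "")):
    """Simplified sketch of recursive character splitting (no chunk
    overlap or merging of small pieces, unlike the library class)."""
    if len(text) <= chunk_size:
        return [text] if text else []
    sep = separators[0]
    rest = separators[1:] if len(separators) > 1 else separators
    parts = list(text) if sep == "" else text.split(sep)
    chunks = []
    for part in parts:
        if len(part) > chunk_size:
            chunks.extend(recursive_split(part, chunk_size, rest))
        elif part:
            chunks.append(part)
    return chunks

print(recursive_split("aaaa\n\nbb cc dd", chunk_size=5))  # ['aaaa', 'bb', 'cc', 'dd']
```

Trying paragraph breaks before line breaks before spaces keeps semantically related text together for as long as the size budget allows.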
Classes¶
text_splitter.CharacterTextSplitter([...])
Splitting text that looks at characters.
text_splitter.ElementType
Element type as typed dict.
text_splitter.HTMLHeaderTextSplitter(...[, ...])
Splitting HTML files based on specified headers.
text_splitter.HeaderType
Header type as typed dict.
text_splitter.Language(value[, names, ...])
Enum of the programming languages.
text_splitter.LatexTextSplitter(**kwargs)
Attempts to split the text along Latex-formatted layout elements.
text_splitter.LineType
Line type as typed dict.
text_splitter.MarkdownHeaderTextSplitter(...)
Splitting markdown files based on specified headers.
text_splitter.MarkdownTextSplitter(**kwargs)
Attempts to split the text along Markdown-formatted headings.
text_splitter.NLTKTextSplitter([separator, ...])
Splitting text using NLTK package.
text_splitter.PythonCodeTextSplitter(**kwargs)
Attempts to split the text along Python syntax.
text_splitter.RecursiveCharacterTextSplitter([...])
Splitting text by recursively looking at characters.
text_splitter.SentenceTransformersTokenTextSplitter([...])
Splitting text to tokens using sentence model tokenizer.
text_splitter.SpacyTextSplitter([separator, ...])
Splitting text using Spacy package.
text_splitter.TextSplitter(chunk_size, ...)
Interface for splitting text into chunks.
text_splitter.TokenTextSplitter([...])
Splitting text to tokens using model tokenizer.
text_splitter.Tokenizer(chunk_overlap, ...)
Tokenizer data class.
Functions¶
text_splitter.split_text_on_tokens(*, text, ...)
Split incoming text and return chunks using tokenizer.
langchain.tools¶
Tools are classes that an Agent uses to interact with the world.
Each tool has a description. The agent uses the description to choose the right
tool for the job.
Class hierarchy:
ToolMetaclass --> BaseTool --> <name>Tool # Examples: AIPluginTool, BaseGraphQLTool
<name> # Examples: BraveSearch, HumanInputRun
Main helpers:
CallbackManagerForToolRun, AsyncCallbackManagerForToolRun
Classes¶
tools.retriever.RetrieverInput
Input to the retriever.
Functions¶
tools.render.render_text_description(tools)
Render the tool name and description in plain text.
tools.render.render_text_description_and_args(tools)
Render the tool name, description, and args in plain text.
tools.retriever.create_retriever_tool(...)
Create a tool to do retrieval of documents.
langchain.utils¶
Utility functions for LangChain.
These functions do not depend on any other LangChain module.
Classes¶
utils.ernie_functions.FunctionDescription
Representation of a callable function to the Ernie API.
utils.ernie_functions.ToolDescription
Representation of a callable function to the Ernie API.
Functions¶
utils.ernie_functions.convert_pydantic_to_ernie_function(...)
Converts a Pydantic model to a function description for the Ernie API.
utils.ernie_functions.convert_pydantic_to_ernie_tool(...)
Converts a Pydantic model to a function description for the Ernie API.
langchain_google_genai 0.0.5¶
langchain_google_genai.chat_models¶
Classes¶
chat_models.ChatGoogleGenerativeAI
Google Generative AI Chat models API.
chat_models.ChatGoogleGenerativeAIError
Custom exception class for errors associated with the Google GenAI API.
Functions¶
langchain_google_genai.embeddings¶
Classes¶
embeddings.GoogleGenerativeAIEmbeddings
Google Generative AI Embeddings.
langchain_google_genai.llms¶
Classes¶
llms.GoogleGenerativeAI
Google GenerativeAI models.
Functions¶
https://api.python.langchain.com/en/latest/google_genai_api_reference.html
langchain_core.beta.runnables.context.config_with_context¶
langchain_core.beta.runnables.context.config_with_context(config: RunnableConfig, steps: List[Runnable]) → RunnableConfig[source]¶
Patch a runnable config with context getters and setters.
Parameters
config – The runnable config.
steps – The runnable steps.
Returns
The patched runnable config.
https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.config_with_context.html
langchain_core.beta.runnables.context.aconfig_with_context¶
langchain_core.beta.runnables.context.aconfig_with_context(config: RunnableConfig, steps: List[Runnable]) → RunnableConfig[source]¶
Asynchronously patch a runnable config with context getters and setters.
Parameters
config – The runnable config.
steps – The runnable steps.
Returns
The patched runnable config.
https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.aconfig_with_context.html
langchain_core.beta.runnables.context.Context¶
class langchain_core.beta.runnables.context.Context[source]¶
Context for a runnable.
Methods
__init__()
create_scope(scope, /)
Create a context scope.
getter(key, /)
setter([_key, _value])
__init__()¶
static create_scope(scope: str, /) → PrefixContext[source]¶
Create a context scope.
Parameters
scope – The scope.
Returns
The context scope.
static getter(key: Union[str, List[str]], /) → ContextGet[source]¶
static setter(_key: Optional[str] = None, _value: Optional[Union[Runnable[Input, Output], Callable[[Input], Output], Callable[[Input], Awaitable[Output]], Any]] = None, /, **kwargs: Union[Runnable[Input, Output], Callable[[Input], Output], Callable[[Input], Awaitable[Output]], Any]) → ContextSet[source]¶
https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.Context.html
langchain_core.beta.runnables.context.PrefixContext¶
class langchain_core.beta.runnables.context.PrefixContext(prefix: str = '')[source]¶
Context for a runnable with a prefix.
Attributes
prefix
Methods
__init__([prefix])
getter(key, /)
setter([_key, _value])
__init__(prefix: str = '')[source]¶
getter(key: Union[str, List[str]], /) → ContextGet[source]¶
setter(_key: Optional[str] = None, _value: Optional[Union[Runnable[Input, Output], Callable[[Input], Output], Callable[[Input], Awaitable[Output]], Any]] = None, /, **kwargs: Union[Runnable[Input, Output], Callable[[Input], Output], Callable[[Input], Awaitable[Output]], Any]) → ContextSet[source]¶
https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.PrefixContext.html
langchain_core.beta.runnables.context.ContextGet¶
class langchain_core.beta.runnables.context.ContextGet[source]¶
Bases: RunnableSerializable
Get a context value.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param key: Union[str, List[str]] [Required]¶
param prefix: str = ''¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Any, config: Optional[RunnableConfig] = None, **kwargs: Any) → Any[source]¶
Default implementation of ainvoke, which calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextGet.html
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Any, config: Optional[RunnableConfig] = None) → Any[source]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain_core.runnables.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property ids: List[str]¶
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
langchain_core.beta.runnables.context.ContextSet¶
class langchain_core.beta.runnables.context.ContextSet[source]¶
Bases: RunnableSerializable
Set a context value.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param keys: Mapping[str, Optional[langchain_core.runnables.base.Runnable]] [Required]¶
param prefix: str = ''¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Any, config: Optional[RunnableConfig] = None, **kwargs: Any) → Any[source]¶
Default implementation of ainvoke, which calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
https://api.python.langchain.com/en/latest/beta/langchain_core.beta.runnables.context.ContextSet.html
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Any, config: Optional[RunnableConfig] = None) → Any[source]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain_core.runnables.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property ids: List[str]¶
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
langchain_community.chat_loaders.gmail.GMailLoader¶
class langchain_community.chat_loaders.gmail.GMailLoader(creds: Any, n: int = 100, raise_error: bool = False)[source]¶
Load data from GMail.
There are many ways you might want to load data from GMail; this loader is currently fairly opinionated about how it does so. It first looks for all messages that you have sent, then finds those in which you are responding to a previous email. It fetches that previous email and creates a training example consisting of that email followed by your reply.
Note that there are clear limitations here. For example,
all examples created are only looking at the previous email for context.
To use:
Set up a Google Developer Account: go to the Google Developer Console, create a project,
and enable the Gmail API for that project.
This will give you a credentials.json file that you’ll need later.
Methods
__init__(creds[, n, raise_error])
lazy_load()
Lazy load the chat sessions.
load()
Eagerly load the chat sessions into memory.
__init__(creds: Any, n: int = 100, raise_error: bool = False) → None[source]¶
lazy_load() → Iterator[ChatSession][source]¶
Lazy load the chat sessions.
load() → List[ChatSession]¶
Eagerly load the chat sessions into memory.
Examples using GMailLoader¶
GMail
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.gmail.GMailLoader.html
langchain_community.chat_loaders.imessage.IMessageChatLoader¶
class langchain_community.chat_loaders.imessage.IMessageChatLoader(path: Optional[Union[str, Path]] = None)[source]¶
Load chat sessions from the iMessage chat.db SQLite file.
It only works on macOS when you have iMessage enabled and have the chat.db file.
The chat.db file is likely located at ~/Library/Messages/chat.db. However, your
terminal may not have permission to access this file. To resolve this, you can
copy the file to a different location, change the permissions of the file, or
grant full disk access for your terminal emulator
in System Settings > Security and Privacy > Full Disk Access.
Initialize the IMessageChatLoader.
Parameters
path (str or Path, optional) – Path to the chat.db SQLite file.
Defaults to None, in which case the default path
~/Library/Messages/chat.db will be used.
Methods
__init__([path])
Initialize the IMessageChatLoader.
lazy_load()
Lazy load the chat sessions from the iMessage chat.db and yield them in the required format.
load()
Eagerly load the chat sessions into memory.
__init__(path: Optional[Union[str, Path]] = None)[source]¶
Initialize the IMessageChatLoader.
Parameters
path (str or Path, optional) – Path to the chat.db SQLite file.
Defaults to None, in which case the default path
~/Library/Messages/chat.db will be used.
lazy_load() → Iterator[ChatSession][source]¶
Lazy load the chat sessions from the iMessage chat.db
and yield them in the required format.
Yields
ChatSession – Loaded chat session.
load() → List[ChatSession]¶
Eagerly load the chat sessions into memory.
Examples using IMessageChatLoader¶
iMessage
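Because chat.db is a SQLite file, the lazy_load pattern above maps naturally onto streaming rows with the standard library. A minimal sketch — the toy_messages table and its columns are illustrative assumptions; the real chat.db schema is considerably more complex:

```python
import os
import sqlite3
import tempfile
from typing import Dict, Iterator


def lazy_load_sessions(db_path: str) -> Iterator[Dict[str, list]]:
    """Yield one chat session per chat_id, reading rows lazily from SQLite.

    Toy schema: toy_messages(chat_id, sender, text).
    """
    con = sqlite3.connect(db_path)
    try:
        chat_ids = [row[0] for row in con.execute(
            "SELECT DISTINCT chat_id FROM toy_messages ORDER BY chat_id")]
        for chat_id in chat_ids:
            rows = con.execute(
                "SELECT sender, text FROM toy_messages "
                "WHERE chat_id = ? ORDER BY rowid", (chat_id,)).fetchall()
            yield {"chat_id": chat_id, "messages": rows}
    finally:
        con.close()


# Build a throwaway database to demonstrate the flow.
db_path = os.path.join(tempfile.mkdtemp(), "chat.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE toy_messages (chat_id TEXT, sender TEXT, text TEXT)")
con.executemany("INSERT INTO toy_messages VALUES (?, ?, ?)", [
    ("c1", "alice", "hi"), ("c1", "bob", "hello"), ("c2", "carol", "hey")])
con.commit()
con.close()

sessions = list(lazy_load_sessions(db_path))
```

Copying the real chat.db elsewhere first, as the docstring advises, also sidesteps the macOS Full Disk Access restriction.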
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.imessage.IMessageChatLoader.html
langchain_community.chat_loaders.facebook_messenger.SingleFileFacebookMessengerChatLoader¶
class langchain_community.chat_loaders.facebook_messenger.SingleFileFacebookMessengerChatLoader(path: Union[Path, str])[source]¶
Load Facebook Messenger chat data from a single file.
Parameters
path (Union[Path, str]) – The path to the chat file.
path¶
The path to the chat file.
Type
Path
Methods
__init__(path)
lazy_load()
Lazy loads the chat data from the file.
load()
Eagerly load the chat sessions into memory.
__init__(path: Union[Path, str]) → None[source]¶
lazy_load() → Iterator[ChatSession][source]¶
Lazy loads the chat data from the file.
Yields
ChatSession – A chat session containing the loaded messages.
load() → List[ChatSession]¶
Eagerly load the chat sessions into memory.
Examples using SingleFileFacebookMessengerChatLoader¶
Facebook Messenger
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.facebook_messenger.SingleFileFacebookMessengerChatLoader.html
langchain_community.chat_loaders.facebook_messenger.FolderFacebookMessengerChatLoader¶
class langchain_community.chat_loaders.facebook_messenger.FolderFacebookMessengerChatLoader(path: Union[str, Path])[source]¶
Load Facebook Messenger chat data from a folder.
Parameters
path (Union[str, Path]) – The path to the directory
containing the chat files.
path¶
The path to the directory containing the chat files.
Type
Path
Methods
__init__(path)
lazy_load()
Lazy loads the chat data from the folder.
load()
Eagerly load the chat sessions into memory.
__init__(path: Union[str, Path]) → None[source]¶
lazy_load() → Iterator[ChatSession][source]¶
Lazy loads the chat data from the folder.
Yields
ChatSession – A chat session containing the loaded messages.
load() → List[ChatSession]¶
Eagerly load the chat sessions into memory.
Examples using FolderFacebookMessengerChatLoader¶
Facebook Messenger
Chat loaders
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.facebook_messenger.FolderFacebookMessengerChatLoader.html
langchain_community.chat_loaders.utils.map_ai_messages_in_session¶
langchain_community.chat_loaders.utils.map_ai_messages_in_session(chat_sessions: ChatSession, sender: str) → ChatSession[source]¶
Convert messages from the specified ‘sender’ to AI messages.
This is useful for fine-tuning the AI to adapt to your voice.
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.utils.map_ai_messages_in_session.html
langchain_community.chat_loaders.slack.SlackChatLoader¶
class langchain_community.chat_loaders.slack.SlackChatLoader(path: Union[str, Path])[source]¶
Load Slack conversations from a dump zip file.
Initialize the chat loader with the path to the exported Slack dump zip file.
Parameters
path – Path to the exported Slack dump zip file.
Methods
__init__(path)
Initialize the chat loader with the path to the exported Slack dump zip file.
lazy_load()
Lazy load the chat sessions from the Slack dump file and yield them in the required format.
load()
Eagerly load the chat sessions into memory.
__init__(path: Union[str, Path])[source]¶
Initialize the chat loader with the path to the exported Slack dump zip file.
Parameters
path – Path to the exported Slack dump zip file.
lazy_load() → Iterator[ChatSession][source]¶
Lazy load the chat sessions from the Slack dump file and yield them
in the required format.
Returns
Iterator of chat sessions containing messages.
load() → List[ChatSession]¶
Eagerly load the chat sessions into memory.
Examples using SlackChatLoader¶
Slack
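A Slack export is a zip archive of JSON files, so the loader's lazy flow can be sketched over such a dump with only the standard library. The layout assumed here — one JSON array of {"user", "text"} objects per .json member — is a simplification; real exports nest files per channel directory and carry richer message fields:

```python
import io
import json
import zipfile
from typing import Dict, Iterator


def lazy_load_slack_dump(zip_bytes: bytes) -> Iterator[Dict[str, list]]:
    """Yield one session per .json member of the dump."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in sorted(zf.namelist()):
            if not name.endswith(".json"):
                continue
            messages = json.loads(zf.read(name))
            yield {"channel": name[:-len(".json")],
                   "messages": [(m["user"], m["text"]) for m in messages]}


# Build a tiny in-memory dump to demonstrate.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("general.json", json.dumps([{"user": "alice", "text": "hi"}]))

sessions = list(lazy_load_slack_dump(buf.getvalue()))
```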
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.slack.SlackChatLoader.html
langchain_community.chat_loaders.telegram.TelegramChatLoader¶
class langchain_community.chat_loaders.telegram.TelegramChatLoader(path: Union[str, Path])[source]¶
Load Telegram conversations to LangChain chat messages.
To export, use the Telegram Desktop app from
https://desktop.telegram.org/, select a conversation, click the three dots
in the top right corner, and select “Export chat history”. Then select
“Machine-readable JSON” (preferred) to export. Note: the ‘lite’ versions of
the desktop app (like “Telegram for macOS”) do not support exporting chat
history.
Initialize the TelegramChatLoader.
Parameters
path (Union[str, Path]) – Path to the exported Telegram chat zip,
directory, json, or HTML file.
Methods
__init__(path)
Initialize the TelegramChatLoader.
lazy_load()
Lazy load the messages from the chat file and yield them as chat sessions.
load()
Eagerly load the chat sessions into memory.
__init__(path: Union[str, Path])[source]¶
Initialize the TelegramChatLoader.
Parameters
path (Union[str, Path]) – Path to the exported Telegram chat zip,
directory, json, or HTML file.
lazy_load() → Iterator[ChatSession][source]¶
Lazy load the messages from the chat file and yield them
as chat sessions.
Yields
ChatSession – The loaded chat session.
load() → List[ChatSession]¶
Eagerly load the chat sessions into memory.
Examples using TelegramChatLoader¶
Telegram
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.telegram.TelegramChatLoader.html
langchain_community.chat_loaders.utils.merge_chat_runs¶
langchain_community.chat_loaders.utils.merge_chat_runs(chat_sessions: Iterable[ChatSession]) → Iterator[ChatSession][source]¶
Merge chat runs together.
A chat run is a sequence of messages from the same sender.
Parameters
chat_sessions – A list of chat sessions.
Returns
A list of chat sessions with merged chat runs.
Examples using merge_chat_runs¶
Facebook Messenger
Slack
WhatsApp
iMessage
Telegram
Discord
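The merge can be sketched in plain Python. This is a toy re-implementation of the documented behavior, not the library function: messages are modeled as (sender, text) tuples, and the "\n\n" delimiter mirrors the default of merge_chat_runs_in_session:

```python
from typing import Dict, Iterable, Iterator, List, Tuple

# A message is (sender, text); a session is {"messages": [...]}.
Message = Tuple[str, str]
Session = Dict[str, List[Message]]


def merge_runs_in_session(session: Session, delimiter: str = "\n\n") -> Session:
    """Collapse consecutive messages from the same sender into one,
    joining their texts with the delimiter."""
    merged: List[Message] = []
    for sender, text in session["messages"]:
        if merged and merged[-1][0] == sender:
            merged[-1] = (sender, merged[-1][1] + delimiter + text)
        else:
            merged.append((sender, text))
    return {"messages": merged}


def merge_runs(sessions: Iterable[Session]) -> Iterator[Session]:
    for session in sessions:
        yield merge_runs_in_session(session)


out = list(merge_runs([{"messages": [("a", "hi"), ("a", "there"), ("b", "yo")]}]))
```

The two consecutive messages from "a" become one message "hi\n\nthere" while the message from "b" is untouched.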
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.utils.merge_chat_runs.html
langchain_community.chat_loaders.whatsapp.WhatsAppChatLoader¶
class langchain_community.chat_loaders.whatsapp.WhatsAppChatLoader(path: str)[source]¶
Load WhatsApp conversations from a dump zip file or directory.
Initialize the WhatsAppChatLoader.
Parameters
path (str) – Path to the exported WhatsApp chat
(zip file, directory, or single file).
To generate the dump, open the chat, click the three dots in the top
right corner, and select “More”. Then select “Export chat” and
choose “Without media”.
Methods
__init__(path)
Initialize the WhatsAppChatLoader.
lazy_load()
Lazy load the messages from the chat file and yield them as chat sessions.
load()
Eagerly load the chat sessions into memory.
__init__(path: str)[source]¶
Initialize the WhatsAppChatLoader.
Parameters
path (str) – Path to the exported WhatsApp chat
(zip file, directory, or single file).
To generate the dump, open the chat, click the three dots in the top
right corner, and select “More”. Then select “Export chat” and
choose “Without media”.
lazy_load() → Iterator[ChatSession][source]¶
Lazy load the messages from the chat file and yield
them as chat sessions.
Yields
Iterator[ChatSession] – The loaded chat sessions.
load() → List[ChatSession]¶
Eagerly load the chat sessions into memory.
Examples using WhatsAppChatLoader¶
WhatsApp
WhatsApp Chat
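An "Export chat / Without media" dump is a plain-text file with one message per line, so the parsing step can be sketched with a regex. The line format assumed below ("date, time - Sender: text") is an illustrative approximation — the real timestamp format varies by locale, and this is not the library's parser:

```python
import re
from typing import List, Tuple

# Roughly "1/1/23, 10:00 - Alice: hi"; an assumption, not WhatsApp's spec.
LINE = re.compile(r"^(?P<ts>[^-]+) - (?P<sender>[^:]+): (?P<text>.*)$")


def parse_whatsapp_export(text: str) -> List[Tuple[str, str]]:
    """Return (sender, text) pairs; lines that don't match the pattern
    are treated as continuations of the previous message."""
    messages: List[Tuple[str, str]] = []
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            messages.append((m.group("sender"), m.group("text")))
        elif messages:
            sender, prev = messages[-1]
            messages[-1] = (sender, prev + "\n" + line)
    return messages


dump = "1/1/23, 10:00 - Alice: hi\n1/1/23, 10:01 - Bob: hello\nstill Bob"
messages = parse_whatsapp_export(dump)
```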
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.whatsapp.WhatsAppChatLoader.html
langchain_community.chat_loaders.base.BaseChatLoader¶
class langchain_community.chat_loaders.base.BaseChatLoader[source]¶
Base class for chat loaders.
Methods
__init__()
lazy_load()
Lazy load the chat sessions.
load()
Eagerly load the chat sessions into memory.
__init__()¶
abstract lazy_load() → Iterator[ChatSession][source]¶
Lazy load the chat sessions.
load() → List[ChatSession][source]¶
Eagerly load the chat sessions into memory.
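The lazy_load/load split follows a common pattern: subclasses implement lazy_load as a generator, and load simply materializes it into a list. A minimal stand-in in plain Python — these are not the langchain_community classes, and ChatSession is modeled here as a bare dict:

```python
from abc import ABC, abstractmethod
from typing import Dict, Iterator, List

# Toy stand-in for a chat session: a dict with a "messages" key,
# mirroring the shape the loaders in this module yield.
ChatSession = Dict[str, list]


class ToyChatLoaderBase(ABC):
    """Mimics the BaseChatLoader contract: lazy_load is abstract,
    load eagerly materializes the iterator."""

    @abstractmethod
    def lazy_load(self) -> Iterator[ChatSession]:
        ...

    def load(self) -> List[ChatSession]:
        return list(self.lazy_load())


class InMemoryChatLoader(ToyChatLoaderBase):
    def __init__(self, sessions: List[ChatSession]):
        self.sessions = sessions

    def lazy_load(self) -> Iterator[ChatSession]:
        yield from self.sessions


loader = InMemoryChatLoader([{"messages": [("alice", "hi")]}])
sessions = loader.load()
```

A custom loader therefore only needs to implement lazy_load; the eager variant comes for free.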
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.base.BaseChatLoader.html
langchain_community.chat_loaders.langsmith.LangSmithDatasetChatLoader¶
class langchain_community.chat_loaders.langsmith.LangSmithDatasetChatLoader(*, dataset_name: str, client: Optional['Client'] = None)[source]¶
Load chat sessions from a LangSmith dataset with the “chat” data type.
dataset_name¶
The name of the LangSmith dataset.
Type
str
client¶
Instance of LangSmith client for fetching data.
Type
Client
Initialize a new LangSmithDatasetChatLoader instance.
Parameters
dataset_name – The name of the LangSmith dataset.
client – An instance of LangSmith client; if not provided,
a new client instance will be created.
Methods
__init__(*, dataset_name[, client])
Initialize a new LangSmithDatasetChatLoader instance.
lazy_load()
Lazy load the chat sessions from the specified LangSmith dataset.
load()
Eagerly load the chat sessions into memory.
__init__(*, dataset_name: str, client: Optional['Client'] = None)[source]¶
Initialize a new LangSmithDatasetChatLoader instance.
Parameters
dataset_name – The name of the LangSmith dataset.
client – An instance of LangSmith client; if not provided,
a new client instance will be created.
lazy_load() → Iterator[ChatSession][source]¶
Lazy load the chat sessions from the specified LangSmith dataset.
This method fetches the chat data from the dataset and
converts each data point to chat sessions on-the-fly,
yielding one session at a time.
Returns
Iterator of chat sessions containing messages.
load() → List[ChatSession]¶
Eagerly load the chat sessions into memory.
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.langsmith.LangSmithDatasetChatLoader.html
langchain_community.chat_loaders.utils.map_ai_messages¶
langchain_community.chat_loaders.utils.map_ai_messages(chat_sessions: Iterable[ChatSession], sender: str) → Iterator[ChatSession][source]¶
Convert messages from the specified ‘sender’ to AI messages.
This is useful for fine-tuning the AI to adapt to your voice.
Examples using map_ai_messages¶
Facebook Messenger
GMail
Slack
WhatsApp
iMessage
Telegram
Discord
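The re-tagging step can be sketched in plain Python. This is a toy re-implementation of the documented behavior, not the library function; messages are modeled as (role, sender, text) tuples with "human"/"ai" roles:

```python
from typing import Dict, Iterable, Iterator, List, Tuple

# (role, sender, text); role is "human" or "ai".
Message = Tuple[str, str, str]
Session = Dict[str, List[Message]]


def map_ai_in_session(session: Session, sender: str) -> Session:
    """Re-tag every message from `sender` as an AI message,
    leaving the rest as human messages."""
    messages = [("ai" if who == sender else "human", who, text)
                for _, who, text in session["messages"]]
    return {"messages": messages}


def map_ai(sessions: Iterable[Session], sender: str) -> Iterator[Session]:
    for session in sessions:
        yield map_ai_in_session(session, sender)


out = list(map_ai(
    [{"messages": [("human", "alice", "hi"), ("human", "bob", "hello")]}],
    sender="bob",
))
```

After mapping with sender="bob", bob's messages carry the "ai" role — the shape needed for fine-tuning on your own voice.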
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.utils.map_ai_messages.html
langchain_community.chat_loaders.utils.merge_chat_runs_in_session¶
langchain_community.chat_loaders.utils.merge_chat_runs_in_session(chat_session: ChatSession, delimiter: str = '\n\n') → ChatSession[source]¶
Merge chat runs together in a chat session.
A chat run is a sequence of messages from the same sender.
Parameters
chat_session – A chat session.
Returns
A chat session with merged chat runs.
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.utils.merge_chat_runs_in_session.html
langchain_community.chat_loaders.langsmith.LangSmithRunChatLoader¶
class langchain_community.chat_loaders.langsmith.LangSmithRunChatLoader(runs: Iterable[Union[str, Run]], client: Optional['Client'] = None)[source]¶
Load chat sessions from a list of LangSmith “llm” runs.
runs¶
The list of LLM run IDs or run objects.
Type
Iterable[Union[str, Run]]
client¶
Instance of LangSmith client for fetching data.
Type
Client
Initialize a new LangSmithRunChatLoader instance.
Parameters
runs – List of LLM run IDs or run objects.
client – An instance of LangSmith client; if not provided,
a new client instance will be created.
Methods
__init__(runs[, client])
Initialize a new LangSmithRunChatLoader instance.
lazy_load()
Lazy load the chat sessions from the iterable of run IDs.
load()
Eagerly load the chat sessions into memory.
__init__(runs: Iterable[Union[str, Run]], client: Optional['Client'] = None)[source]¶
Initialize a new LangSmithRunChatLoader instance.
Parameters
runs – List of LLM run IDs or run objects.
client – An instance of LangSmith client; if not provided,
a new client instance will be created.
lazy_load() → Iterator[ChatSession][source]¶
Lazy load the chat sessions from the iterable of run IDs.
This method fetches the runs and converts them to chat sessions on-the-fly,
yielding one session at a time.
Returns
Iterator of chat sessions containing messages.
load() → List[ChatSession]¶
Eagerly load the chat sessions into memory.
https://api.python.langchain.com/en/latest/chat_loaders/langchain_community.chat_loaders.langsmith.LangSmithRunChatLoader.html
langchain.retrievers.self_query.redis.RedisTranslator¶
class langchain.retrievers.self_query.redis.RedisTranslator(schema: RedisModel)[source]¶
Visitor for translating structured queries to Redis filter expressions.
Attributes
allowed_comparators
Subset of allowed logical comparators.
allowed_operators
Subset of allowed logical operators.
Methods
__init__(schema)
from_vectorstore(vectorstore)
visit_comparison(comparison)
Translate a Comparison.
visit_operation(operation)
Translate an Operation.
visit_structured_query(structured_query)
Translate a StructuredQuery.
__init__(schema: RedisModel) → None[source]¶
classmethod from_vectorstore(vectorstore: Redis) → RedisTranslator[source]¶
visit_comparison(comparison: Comparison) → RedisFilterExpression[source]¶
Translate a Comparison.
visit_operation(operation: Operation) → Any[source]¶
Translate an Operation.
visit_structured_query(structured_query: StructuredQuery) → Tuple[str, dict][source]¶
Translate a StructuredQuery.
https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.redis.RedisTranslator.html
langchain_community.retrievers.svm.create_index¶
langchain_community.retrievers.svm.create_index(contexts: List[str], embeddings: Embeddings) → ndarray[source]¶
Create an index of embeddings for a list of contexts.
Parameters
contexts – List of contexts to embed.
embeddings – Embeddings model to use.
Returns
Index of embeddings.
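Conceptually the function stacks one embedding per context into a matrix whose row i corresponds to contexts[i]. A toy sketch with a stub embedding function and plain lists standing in for the numpy ndarray the real function returns:

```python
from typing import Callable, List

Vector = List[float]


def create_index(contexts: List[str],
                 embed: Callable[[List[str]], List[Vector]]) -> List[Vector]:
    """Embed each context once; row i of the result is the vector
    for contexts[i]."""
    return embed(contexts)


def stub_embed(texts: List[str]) -> List[Vector]:
    # Deterministic toy embedding: [length, vowel count].
    return [[float(len(t)), float(sum(c in "aeiou" for c in t))]
            for t in texts]


index = create_index(["hello", "hi"], stub_embed)
```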
https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.svm.create_index.html
langchain_community.retrievers.kendra.DocumentAttribute¶
class langchain_community.retrievers.kendra.DocumentAttribute[source]¶
Bases: BaseModel
Document attribute.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param Key: str [Required]¶
The key of the attribute.
param Value: langchain_community.retrievers.kendra.DocumentAttributeValue [Required]¶
The value of the attribute.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.kendra.DocumentAttribute.html
langchain_community.retrievers.kendra.ResultItem¶
class langchain_community.retrievers.kendra.ResultItem[source]¶
Bases: BaseModel, ABC
Base class of a result item.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param DocumentAttributes: Optional[List[langchain_community.retrievers.kendra.DocumentAttribute]] = []¶
The document attributes.
param DocumentId: Optional[str] = None¶
The document ID.
param DocumentURI: Optional[str] = None¶
The document URI.
param Id: Optional[str] = None¶
The ID of the relevant result item.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_additional_metadata() → dict[source]¶
Document additional metadata dict.
This returns any extra metadata except these:
result_id
document_id
source
title
excerpt
document_attributes
get_document_attributes_dict() → Dict[str, Optional[Union[str, int, List[str]]]][source]¶
Document attributes dict.
abstract get_excerpt() → str[source]¶
Document excerpt or passage original content as retrieved by Kendra.
abstract get_title() → str[source]¶
Document title.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_doc(page_content_formatter: Callable[[ResultItem], str] = combined_text) → Document[source]¶
Converts this item to a Document.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.kendra.ResultItem.html
langchain.retrievers.self_query.opensearch.OpenSearchTranslator¶
class langchain.retrievers.self_query.opensearch.OpenSearchTranslator[source]¶
Translate OpenSearch internal query domain-specific
language elements to valid filters.
Attributes
allowed_comparators
Subset of allowed logical comparators.
allowed_operators
Subset of allowed logical operators.
Methods
__init__()
visit_comparison(comparison)
Translate a Comparison.
visit_operation(operation)
Translate an Operation.
visit_structured_query(structured_query)
Translate a StructuredQuery.
__init__()¶
visit_comparison(comparison: Comparison) → Dict[source]¶
Translate a Comparison.
visit_operation(operation: Operation) → Dict[source]¶
Translate an Operation.
visit_structured_query(structured_query: StructuredQuery) → Tuple[str, dict][source]¶
Translate a StructuredQuery.
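Translators like this one implement the visitor pattern: each node of the structured query tree is dispatched to a visit_* method that emits backend-specific filter syntax. A toy sketch with an invented dict output shape — not real OpenSearch DSL, and Comparison/Operation here are simplified stand-ins for langchain's classes:

```python
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class Comparison:
    comparator: str  # e.g. "eq", "gt"
    attribute: str
    value: Any


@dataclass
class Operation:
    operator: str  # e.g. "and", "or"
    arguments: List[Any]


class ToyTranslator:
    """Visitor turning a structured query tree into a nested dict filter."""

    def visit_comparison(self, c: Comparison) -> Dict:
        return {c.attribute: {c.comparator: c.value}}

    def visit_operation(self, op: Operation) -> Dict:
        # Recurse into each argument, which may itself be a subtree.
        return {op.operator: [self.visit(arg) for arg in op.arguments]}

    def visit(self, node) -> Dict:
        if isinstance(node, Comparison):
            return self.visit_comparison(node)
        return self.visit_operation(node)


query = Operation("and", [Comparison("eq", "genre", "scifi"),
                          Comparison("gt", "year", 1990)])
filt = ToyTranslator().visit(query)
```

Swapping in a different backend (Redis, OpenSearch, …) only means changing what the visit_* methods emit; the tree walk stays the same.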
https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.opensearch.OpenSearchTranslator.html
langchain_community.retrievers.weaviate_hybrid_search.WeaviateHybridSearchRetriever¶
class langchain_community.retrievers.weaviate_hybrid_search.WeaviateHybridSearchRetriever[source]¶
Bases: BaseRetriever
Weaviate hybrid search retriever.
See the documentation: https://weaviate.io/blog/hybrid-search-explained
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param alpha: float = 0.5¶
The balance between keyword and vector search in the hybrid query
(0 is pure keyword search, 1 is pure vector search).
param attributes: List[str] [Required]¶
The attributes to return in the results.
param client: Any = None¶
The Weaviate client instance to use.
param create_schema_if_missing: bool = True¶
Whether to create the schema if it doesn’t exist.
param index_name: str [Required]¶
The name of the index to use.
param k: int = 4¶
The number of results to return.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the retriever. Defaults to None
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a retriever with its
use case.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the retriever. Defaults to None
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a retriever with its
use case.
param text_key: str [Required]¶
The name of the text key to use.
https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.weaviate_hybrid_search.WeaviateHybridSearchRetriever.html
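One common way to read the alpha parameter is as a convex combination of per-document vector and keyword scores. The sketch below is illustrative only — Weaviate performs its own fusion server-side, and this is not its exact algorithm:

```python
from typing import Dict, List, Tuple


def hybrid_scores(vector_scores: Dict[str, float],
                  keyword_scores: Dict[str, float],
                  alpha: float = 0.5) -> List[Tuple[str, float]]:
    """Blend per-document scores: alpha=1 is pure vector search,
    alpha=0 is pure keyword search. Missing scores count as 0."""
    docs = set(vector_scores) | set(keyword_scores)
    blended = {d: alpha * vector_scores.get(d, 0.0)
                  + (1 - alpha) * keyword_scores.get(d, 0.0)
               for d in docs}
    return sorted(blended.items(), key=lambda kv: kv[1], reverse=True)


ranked = hybrid_scores({"doc1": 0.9, "doc2": 0.2},
                       {"doc2": 0.8, "doc3": 0.5}, alpha=0.5)
```

With alpha=0.5, doc2 (strong on keywords, weak on vectors) edges out doc1 (strong on vectors only): 0.5 vs 0.45.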
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
add_documents(docs: List[Document], **kwargs: Any) → List[str][source]¶
Upload documents to Weaviate.
async aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) → List[Document]¶
Asynchronously get documents relevant to a query.
Parameters
query – string to find relevant documents for
callbacks – Callback manager or list of callbacks
tags – Optional list of tags associated with the retriever. Defaults to None.
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
metadata – Optional metadata associated with the retriever. Defaults to None
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
Returns
List of relevant documents
async ainvoke(input: str, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → List[Document]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
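The example above amounts to splitting the dotted path and dropping the class name; a small illustrative helper (`namespace_from_path` is hypothetical, not part of the API):

```python
def namespace_from_path(qualified_name: str) -> list:
    """E.g. 'langchain.llms.openai.OpenAI' -> ['langchain', 'llms', 'openai']."""
    *module_parts, _class_name = qualified_name.split(".")
    return module_parts

ns = namespace_from_path("langchain.llms.openai.OpenAI")
```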
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output of the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) → List[Document]¶
Retrieve documents relevant to a query.
:param query: string to find relevant documents for
:param callbacks: Callback manager or list of callbacks
:param tags: Optional list of tags associated with the retriever. Defaults to None
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
Parameters
metadata – Optional metadata associated with the retriever. Defaults to None
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
Returns
List of relevant documents
invoke(input: str, config: Optional[RunnableConfig] = None) → List[Document]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
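The semantics are simply one invoke() call per list element; a minimal sketch (`default_map` is a hypothetical stand-in):

```python
def default_map(invoke, inputs):
    """Sketch of Runnable.map semantics: invoke the runnable once per input."""
    return [invoke(x) for x in inputs]

lengths = default_map(len, ["a", "bcd"])  # [1, 3]
```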
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
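The buffer-then-stream default can be sketched as follows (a simplified stand-in: the real implementation accumulates input chunks incrementally, and `default_transform` is hypothetical):

```python
def default_transform(stream_fn, input_iter):
    """Sketch of Runnable.transform: consume the input iterator fully,
    then delegate to the stream implementation on the buffered input."""
    buffered = list(input_iter)   # blocks until the input is exhausted
    yield from stream_fn(buffered)

out = list(default_transform(lambda xs: (x * 2 for x in xs), iter([1, 2, 3])))
```

A subclass that can emit output before the input ends (e.g. a token-by-token mapper) should override transform to avoid this buffering.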
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
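The try-in-order behavior can be sketched with plain callables (illustrative only; `run_with_fallbacks` and `flaky` are hypothetical names, not the langchain API):

```python
def run_with_fallbacks(primary, fallbacks, args, exceptions_to_handle=(Exception,)):
    """Sketch of with_fallbacks semantics: try the primary callable, then
    each fallback in order; re-raise the last error if all of them fail."""
    last_error = None
    for runnable in (primary, *fallbacks):
        try:
            return runnable(*args)
        except exceptions_to_handle as e:
            last_error = e
    raise last_error

def flaky(x):
    raise ValueError("primary failed")

result = run_with_fallbacks(flaky, [lambda x: f"fallback: {x}"], ("hi",))
```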
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
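The retry policy described by those parameters can be sketched as a loop (a hypothetical stand-in with scaled-down waits; the real implementation uses the tenacity-style exponential backoff of langchain_core):

```python
import random
import time

def run_with_retry(fn, *, retry_if_exception_type=(Exception,),
                   wait_exponential_jitter=True, stop_after_attempt=3):
    """Sketch of with_retry semantics: retry fn on the given exception
    types, waiting exponentially (with optional jitter) between attempts."""
    for attempt in range(1, stop_after_attempt + 1):
        try:
            return fn()
        except retry_if_exception_type:
            if attempt == stop_after_attempt:
                raise  # out of attempts: propagate the last error
            wait = 2 ** (attempt - 1) / 100  # scaled down for the sketch
            if wait_exponential_jitter:
                wait += random.uniform(0, 0.01)
            time.sleep(wait)

attempts = []
def sometimes_fails():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient")
    return "ok"

result = run_with_retry(sometimes_fails)
```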
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain_core.runnables.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
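The mapping's purpose can be illustrated by resolving each secret id from the environment (an illustrative sketch with a placeholder value; `resolve_secrets` is hypothetical, not a langchain function):

```python
import os

def resolve_secrets(lc_secrets: dict) -> dict:
    """Sketch: map constructor argument names to values read from the
    environment variables named by their secret ids."""
    return {arg: os.environ.get(secret_id)
            for arg, secret_id in lc_secrets.items()}

os.environ["OPENAI_API_KEY"] = "sk-demo"  # placeholder value for the sketch
secrets = resolve_secrets({"openai_api_key": "OPENAI_API_KEY"})
```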
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using WeaviateHybridSearchRetriever¶
Weaviate Hybrid Search
langchain.retrievers.self_query.pinecone.PineconeTranslator¶
class langchain.retrievers.self_query.pinecone.PineconeTranslator[source]¶
Translate Pinecone internal query language elements to valid filters.
Attributes
allowed_comparators
Subset of allowed logical comparators.
allowed_operators
Subset of allowed logical operators.
Methods
__init__()
visit_comparison(comparison)
Translate a Comparison.
visit_operation(operation)
Translate an Operation.
visit_structured_query(structured_query)
Translate a StructuredQuery.
__init__()¶
visit_comparison(comparison: Comparison) → Dict[source]¶
Translate a Comparison.
visit_operation(operation: Operation) → Dict[source]¶
Translate an Operation.
visit_structured_query(structured_query: StructuredQuery) → Tuple[str, dict][source]¶
Translate a StructuredQuery.
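To illustrate the kind of output these visit methods produce: an internal Comparison such as (gte, "year", 2020) becomes a Pinecone-style metadata filter. This sketch uses plain tuples and dicts rather than langchain's Comparison/Operation objects, and assumes Pinecone's `$`-prefixed filter operators ($eq, $gte, $and, ...):

```python
def translate_comparison(comparator: str, attribute: str, value):
    """Sketch of visit_comparison: one attribute, one $-prefixed operator."""
    return {attribute: {f"${comparator}": value}}

def translate_operation(operator: str, arguments: list):
    """Sketch of visit_operation: a logical operator over translated children."""
    return {f"${operator}": arguments}

filter_ = translate_operation("and", [
    translate_comparison("gte", "year", 2020),
    translate_comparison("eq", "genre", "science"),
])
```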
https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.pinecone.PineconeTranslator.html
langchain.retrievers.self_query.vectara.process_value¶
langchain.retrievers.self_query.vectara.process_value(value: Union[int, float, str]) → str[source]¶
Convert a value to a string and add single quotes if it is a string.
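A minimal sketch matching that documented behavior (illustrative; the real function also handles Vectara-specific details not shown here):

```python
def process_value(value):
    """Stringify the value; wrap it in single quotes when it is a string."""
    if isinstance(value, str):
        return f"'{value}'"
    return str(value)
```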
https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.vectara.process_value.html
langchain_community.retrievers.google_vertex_ai_search.GoogleCloudEnterpriseSearchRetriever¶
class langchain_community.retrievers.google_vertex_ai_search.GoogleCloudEnterpriseSearchRetriever[source]¶
Bases: GoogleVertexAISearchRetriever
Google Vertex Search API retriever alias for backwards compatibility.
DEPRECATED: Use GoogleVertexAISearchRetriever instead.
Initializes private fields.
param credentials: Any = None¶
The default custom credentials (google.auth.credentials.Credentials) to use
when making API calls. If not provided, credentials will be ascertained from
the environment.
param data_store_id: str [Required]¶
Vertex AI Search data store ID.
param engine_data_type: int = 0¶
Defines the Vertex AI Search data type
0 - Unstructured data
1 - Structured data
2 - Website data
Constraints
minimum = 0
maximum = 2
param filter: Optional[str] = None¶
Filter expression.
param get_extractive_answers: bool = False¶
If True return Extractive Answers, otherwise return Extractive Segments or Snippets.
param location_id: str = 'global'¶
Vertex AI Search data store location.
param max_documents: int = 5¶
The maximum number of documents to return.
Constraints
minimum = 1
maximum = 100
param max_extractive_answer_count: int = 1¶
The maximum number of extractive answers returned in each search result.
At most 5 answers will be returned for each SearchResult.
Constraints
minimum = 1
maximum = 5
param max_extractive_segment_count: int = 1¶
The maximum number of extractive segments returned in each search result.
Currently one segment will be returned for each SearchResult.
Constraints
minimum = 1
maximum = 1
https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.google_vertex_ai_search.GoogleCloudEnterpriseSearchRetriever.html
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the retriever. Defaults to None
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a retriever with its
use case.
param project_id: str [Required]¶
Google Cloud Project ID.
param query_expansion_condition: int = 1¶
Specification to determine under which conditions query expansion should occur.
0 - Unspecified query expansion condition. In this case, server behavior defaults
to disabled
1 - Disabled query expansion. Only the exact search query is used, even if SearchResponse.total_size is zero.
2 - Automatic query expansion built by the Search API.
Constraints
minimum = 0
maximum = 2
param serving_config_id: str = 'default_config'¶
Vertex AI Search serving config ID.
param spell_correction_mode: int = 2¶
Specification to determine under which conditions spell correction should occur.
0 - Unspecified spell correction mode. In this case, server behavior defaults
to auto.
1 - Suggestion only. Search API will try to find a spell suggestion if there is any and put it in the SearchResponse.corrected_query.
The spell suggestion will not be used as the search query.
2 - Automatic spell correction built by the Search API. Search will be based on the corrected query if found.
Constraints
minimum = 0
maximum = 2
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the retriever. Defaults to None
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a retriever with its
use case.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async aget_relevant_documents(query: str, *, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) → List[Document]¶
Asynchronously get documents relevant to a query.
:param query: string to find relevant documents for
:param callbacks: Callback manager or list of callbacks
:param tags: Optional list of tags associated with the retriever. Defaults to None
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
Parameters
metadata – Optional metadata associated with the retriever. Defaults to None
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in callbacks.
Returns
List of relevant documents
async ainvoke(input: str, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → List[Document]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
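The "apply ops in order to construct state" idea can be sketched with a minimal jsonpatch-style applier (illustrative only: it handles just top-level "add"/"replace" paths, while the real RunLogPatch ops use full JSON Pointer paths; `apply_ops` is hypothetical):

```python
def apply_ops(state: dict, ops: list) -> dict:
    """Apply a list of simple jsonpatch-like ops to a state dict in place."""
    for op in ops:
        key = op["path"].lstrip("/")
        if op["op"] in ("add", "replace"):
            state[key] = op["value"]
    return state

state = {}
apply_ops(state, [{"op": "add", "path": "/streamed_output", "value": []}])
apply_ops(state, [{"op": "replace", "path": "/streamed_output", "value": ["chunk"]}])
```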
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶