API Reference
langchain.agents: Agents
Interface for agents.
Classes
agents.agent.Agent
Class responsible for calling the language model and deciding the action.
agents.agent.AgentExecutor
Consists of an agent using tools.
agents.agent.AgentOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.agent.BaseMultiActionAgent
Base Agent class.
agents.agent.BaseSingleActionAgent
Base Agent class.
agents.agent.ExceptionTool
Create a new model by parsing and validating input data from keyword arguments.
agents.agent.LLMSingleActionAgent
Create a new model by parsing and validating input data from keyword arguments.
agents.agent_toolkits.azure_cognitive_services.toolkit.AzureCognitiveServicesToolkit
Toolkit for Azure Cognitive Services.
agents.agent_toolkits.base.BaseToolkit
Class representing a collection of related tools.
agents.agent_toolkits.file_management.toolkit.FileManagementToolkit
Toolkit for interacting with local files.
agents.agent_toolkits.gmail.toolkit.GmailToolkit
Toolkit for interacting with Gmail.
agents.agent_toolkits.jira.toolkit.JiraToolkit
Jira Toolkit.
agents.agent_toolkits.json.toolkit.JsonToolkit
Toolkit for interacting with a JSON spec.
agents.agent_toolkits.nla.tool.NLATool
Natural Language API Tool.
agents.agent_toolkits.nla.toolkit.NLAToolkit
Natural Language API Toolkit Definition.
agents.agent_toolkits.office365.toolkit.O365Toolkit
Toolkit for interacting with Office365.
agents.agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing
Create a new model by parsing and validating input data from keyword arguments.
agents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing
Create a new model by parsing and validating input data from keyword arguments.
agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing
Create a new model by parsing and validating input data from keyword arguments.
agents.agent_toolkits.openapi.planner.RequestsPostToolWithParsing
Create a new model by parsing and validating input data from keyword arguments.
agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit
Toolkit for interacting with an OpenAPI API.
agents.agent_toolkits.openapi.toolkit.RequestsToolkit
Toolkit for making requests.
agents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit
Toolkit for web browser tools.
agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit
Toolkit for interacting with a Power BI dataset.
agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit
Toolkit for interacting with Spark SQL.
agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit
Toolkit for interacting with SQL databases.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo
Information about a vectorstore.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit
Toolkit for routing between vector stores.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit
Toolkit for interacting with a vector store.
agents.agent_toolkits.zapier.toolkit.ZapierToolkit
Zapier Toolkit.
agents.agent_types.AgentType(value[, names, ...])
Enumerator with the Agent types.
agents.chat.base.ChatAgent
Create a new model by parsing and validating input data from keyword arguments.
agents.chat.output_parser.ChatOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.conversational.base.ConversationalAgent
An agent designed to hold a conversation in addition to using tools.
agents.conversational.output_parser.ConvoOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.conversational_chat.base.ConversationalChatAgent
An agent designed to hold a conversation in addition to using tools.
agents.conversational_chat.output_parser.ConvoOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.mrkl.base.ChainConfig(action_name, ...)
Configuration for chain to use in MRKL system.
agents.mrkl.base.MRKLChain
Chain that implements the MRKL system.
agents.mrkl.base.ZeroShotAgent
Agent for the MRKL chain.
agents.mrkl.output_parser.MRKLOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.openai_functions_agent.base.OpenAIFunctionsAgent
An agent driven by OpenAI's function-calling API.
agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent
An agent driven by OpenAI's function-calling API.
agents.react.base.ReActChain
Chain that implements the ReAct paper.
agents.react.base.ReActDocstoreAgent
Agent for the ReAct chain.
agents.react.base.ReActTextWorldAgent
Agent for the ReAct TextWorld chain.
agents.react.output_parser.ReActOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.schema.AgentScratchPadChatPromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
agents.self_ask_with_search.base.SelfAskWithSearchAgent
Agent for the self-ask-with-search paper.
agents.self_ask_with_search.base.SelfAskWithSearchChain
Chain that does self-ask with search.
agents.self_ask_with_search.output_parser.SelfAskOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.structured_chat.base.StructuredChatAgent
Create a new model by parsing and validating input data from keyword arguments.
agents.structured_chat.output_parser.StructuredChatOutputParser
Create a new model by parsing and validating input data from keyword arguments.
agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries
Create a new model by parsing and validating input data from keyword arguments.
agents.tools.InvalidTool
Tool that is run when invalid tool name is encountered by agent.
Functions
agents.agent_toolkits.csv.base.create_csv_agent(...)
Create a CSV agent by loading the file into a dataframe and using the pandas agent.
agents.agent_toolkits.json.base.create_json_agent(...)
Construct a JSON agent from an LLM and tools.
agents.agent_toolkits.openapi.base.create_openapi_agent(...)
Construct an OpenAPI agent from an LLM and tools.
agents.agent_toolkits.openapi.planner.create_openapi_agent(...)
Instantiate API planner and controller for a given spec.
agents.agent_toolkits.openapi.spec.dereference_refs(...)
Try to substitute $refs.
agents.agent_toolkits.openapi.spec.reduce_openapi_spec(spec)
Simplify and distill an OpenAPI spec into a reduced form.
agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent(llm, df)
Construct a pandas agent from an LLM and dataframe.
agents.agent_toolkits.powerbi.base.create_pbi_agent(llm)
Construct a Power BI agent from an LLM and tools.
agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent(llm)
Construct a Power BI agent from a Chat LLM and tools.
agents.agent_toolkits.python.base.create_python_agent(...)
Construct a Python agent from an LLM and tool.
agents.agent_toolkits.spark.base.create_spark_dataframe_agent(llm, df)
Construct a Spark agent from an LLM and dataframe.
agents.agent_toolkits.spark_sql.base.create_spark_sql_agent(...)
Construct a Spark SQL agent from an LLM and tools.
agents.agent_toolkits.sql.base.create_sql_agent(...)
Construct a SQL agent from an LLM and tools.
agents.agent_toolkits.vectorstore.base.create_vectorstore_agent(...)
Construct a vectorstore agent from an LLM and tools.
agents.agent_toolkits.vectorstore.base.create_vectorstore_router_agent(...)
Construct a vectorstore router agent from an LLM and tools.
agents.initialize.initialize_agent(tools, llm)
Load an agent executor given tools and LLM.
agents.load_tools.get_all_tool_names()
Get a list of all possible tool names.
agents.load_tools.load_huggingface_tool(...)
Loads a tool from the HuggingFace Hub.
agents.load_tools.load_tools(tool_names[, ...])
Load tools based on their name.
agents.loading.load_agent(path, **kwargs)
Unified method for loading an agent from LangChainHub or the local filesystem.
agents.loading.load_agent_from_config(config)
Load agent from Config Dict.
agents.utils.validate_tools_single_input(...)
Validate tools for single input.
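To show how the pieces above fit together, here is a minimal usage sketch (not part of the original reference) that combines load_tools, initialize_agent, and AgentType into a working AgentExecutor. It assumes this version of langchain and an OPENAI_API_KEY in the environment.

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# "llm-math" is one of the names returned by get_all_tool_names()
tools = load_tools(["llm-math"], llm=llm)
# initialize_agent returns an AgentExecutor: an agent plus its tools
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is 2 raised to the 10th power?")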
langchain.cache: Cache
Beta Feature: base interface for cache.
Classes
cache.BaseCache()
Base interface for cache.
cache.FullLLMCache(**kwargs)
SQLite table for full LLM Cache (all generations).
cache.GPTCache([init_func])
Cache that uses GPTCache as a backend.
cache.InMemoryCache()
Cache that stores things in memory.
cache.MomentoCache(cache_client, cache_name, *)
Cache that uses Momento as a backend.
cache.RedisCache(redis_)
Cache that uses Redis as a backend.
cache.RedisSemanticCache(redis_url, embedding)
Cache that uses Redis as a vector-store backend.
cache.SQLAlchemyCache(engine, cache_schema)
Cache that uses SQLAlchemy as a backend.
cache.SQLiteCache([database_path])
Cache that uses SQLite as a backend.
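A minimal sketch of how these backends are enabled: assign a cache instance to langchain.llm_cache, after which identical LLM calls are served from the cache (assumes an OPENAI_API_KEY).

import langchain
from langchain.cache import InMemoryCache
from langchain.llms import OpenAI

# Swap in SQLiteCache(database_path=".langchain.db") for a persistent cache.
langchain.llm_cache = InMemoryCache()

llm = OpenAI(temperature=0)
llm("Tell me a joke")  # first call hits the API
llm("Tell me a joke")  # identical call is answered from the cache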
langchain.callbacks: Callbacks
Callback handlers that allow listening to events in LangChain.
Classes
callbacks.aim_callback.AimCallbackHandler([...])
Callback Handler that logs to Aim.
callbacks.argilla_callback.ArgillaCallbackHandler(...)
Callback Handler that logs into Argilla.
callbacks.arize_callback.ArizeCallbackHandler([...])
Callback Handler that logs to Arize.
callbacks.arthur_callback.ArthurCallbackHandler(...)
Callback Handler that logs to Arthur platform.
callbacks.base.AsyncCallbackHandler()
Async callback handler that can be used to handle callbacks from langchain.
callbacks.base.BaseCallbackHandler()
Base callback handler that can be used to handle callbacks from langchain.
callbacks.base.BaseCallbackManager(handlers)
Base callback manager that can be used to handle callbacks from LangChain.
callbacks.clearml_callback.ClearMLCallbackHandler([...])
Callback Handler that logs to ClearML.
callbacks.comet_ml_callback.CometCallbackHandler([...])
Callback Handler that logs to Comet.
callbacks.context_callback.ContextCallbackHandler([...])
Callback Handler that records transcripts to Context (https://getcontext.ai).
callbacks.file.FileCallbackHandler(filename)
Callback Handler that writes to a file.
callbacks.flyte_callback.FlyteCallbackHandler()
This callback handler is designed specifically for usage within a Flyte task.
callbacks.human.HumanApprovalCallbackHandler(...)
Callback for manually validating values.
callbacks.human.HumanRejectedException
Exception to raise when a person manually reviews and rejects a value.
callbacks.infino_callback.InfinoCallbackHandler([...])
Callback Handler that logs to Infino.
callbacks.manager.AsyncCallbackManager(handlers)
Async callback manager that can be used to handle callbacks from LangChain.
callbacks.manager.AsyncCallbackManagerForChainRun(*, ...)
Async callback manager for chain run.
callbacks.manager.AsyncCallbackManagerForLLMRun(*, ...)
Async callback manager for LLM run.
callbacks.manager.AsyncCallbackManagerForRetrieverRun(*, ...)
Async callback manager for retriever run.
callbacks.manager.AsyncCallbackManagerForToolRun(*, ...)
Async callback manager for tool run.
callbacks.manager.AsyncParentRunManager(*, ...)
Async Parent Run Manager.
callbacks.manager.AsyncRunManager(*, run_id, ...)
Async Run Manager.
callbacks.manager.BaseRunManager(*, run_id, ...)
Base class for run manager (a bound callback manager).
callbacks.manager.CallbackManager(handlers)
Callback manager that can be used to handle callbacks from langchain.
callbacks.manager.CallbackManagerForChainRun(*, ...)
Callback manager for chain run.
callbacks.manager.CallbackManagerForLLMRun(*, ...)
Callback manager for LLM run.
callbacks.manager.CallbackManagerForRetrieverRun(*, ...)
Callback manager for retriever run.
callbacks.manager.CallbackManagerForToolRun(*, ...)
Callback manager for tool run.
callbacks.manager.ParentRunManager(*, ...[, ...])
Sync Parent Run Manager.
callbacks.manager.RunManager(*, run_id, ...)
Sync Run Manager.
callbacks.mlflow_callback.MlflowCallbackHandler([...])
Callback Handler that logs metrics and artifacts to mlflow server.
callbacks.openai_info.OpenAICallbackHandler()
Callback Handler that tracks OpenAI info.
callbacks.promptlayer_callback.PromptLayerCallbackHandler([...])
Callback handler for promptlayer.
callbacks.stdout.StdOutCallbackHandler([color])
Callback Handler that prints to std out.
callbacks.streaming_aiter.AsyncIteratorCallbackHandler()
Callback handler that returns an async iterator.
callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*)
Callback handler that returns an async iterator.
callbacks.streaming_stdout.StreamingStdOutCallbackHandler()
Callback handler for streaming.
callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler(*)
Callback handler for streaming in agents.
callbacks.streamlit.mutable_expander.ChildRecord(...)
The child record as a NamedTuple.
callbacks.streamlit.mutable_expander.ChildType(value)
The enumerator of the child type.
callbacks.streamlit.streamlit_callback_handler.LLMThoughtState(value)
Enumerator of the LLMThought state.
callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler(...)
A callback handler that writes to a Streamlit app.
callbacks.streamlit.streamlit_callback_handler.ToolRecord(...)
The tool record as a NamedTuple.
callbacks.tracers.base.BaseTracer(**kwargs)
Base interface for tracers.
callbacks.tracers.base.TracerException
Base class for exceptions in tracers module.
callbacks.tracers.evaluation.EvaluatorCallbackHandler(...)
A tracer that runs a run evaluator whenever a run is persisted.
callbacks.tracers.langchain.LangChainTracer([...])
An implementation of the SharedTracer that POSTs to the LangChain endpoint.
callbacks.tracers.langchain_v1.LangChainTracerV1(...)
An implementation of the SharedTracer that POSTs to the LangChain endpoint.
callbacks.tracers.run_collector.RunCollectorCallbackHandler([...])
A tracer that collects all nested runs in a list.
callbacks.tracers.schemas.BaseRun
Base class for Run.
callbacks.tracers.schemas.ChainRun
Class for ChainRun.
callbacks.tracers.schemas.LLMRun
Class for LLMRun.
callbacks.tracers.schemas.Run
Run schema for the V2 API in the Tracer.
callbacks.tracers.schemas.ToolRun
Class for ToolRun.
callbacks.tracers.schemas.TracerSession
TracerSession schema for the V2 API.
callbacks.tracers.schemas.TracerSessionBase
A creation class for TracerSession.
callbacks.tracers.schemas.TracerSessionV1
TracerSessionV1 schema.
callbacks.tracers.schemas.TracerSessionV1Base
Base class for TracerSessionV1.
callbacks.tracers.schemas.TracerSessionV1Create
Create class for TracerSessionV1.
callbacks.tracers.stdout.ConsoleCallbackHandler(...)
Tracer that prints to the console.
callbacks.tracers.wandb.WandbRunArgs
Arguments for the WandbTracer.
callbacks.tracers.wandb.WandbTracer([run_args])
Callback Handler that logs to Weights and Biases.
callbacks.wandb_callback.WandbCallbackHandler([...])
Callback Handler that logs to Weights and Biases.
callbacks.whylabs_callback.WhyLabsCallbackHandler(logger)
Callback Handler for logging to WhyLabs.
Functions
callbacks.aim_callback.import_aim()
Import the aim python package and raise an error if it is not installed.
callbacks.clearml_callback.import_clearml()
Import the clearml python package and raise an error if it is not installed.
callbacks.comet_ml_callback.import_comet_ml()
Import comet_ml and raise an error if it is not installed.
callbacks.context_callback.import_context()
Import the getcontext python package and raise an error if it is not installed.
callbacks.flyte_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.flyte_callback.import_flytekit()
Import flytekit and flytekitplugins-deck-standard.
callbacks.infino_callback.import_infino()
Import the infino client.
callbacks.manager.env_var_is_set(env_var)
Check if an environment variable is set.
callbacks.manager.get_openai_callback()
Get the OpenAI callback handler in a context manager.
callbacks.manager.trace_as_chain_group(...)
Get a callback manager for a chain group in a context manager.
callbacks.manager.tracing_enabled([session_name])
Get the Deprecated LangChainTracer in a context manager.
callbacks.manager.tracing_v2_enabled([...])
Instruct LangChain to log all runs in context to LangSmith.
callbacks.manager.wandb_tracing_enabled([...])
Get the WandbTracer in a context manager.
callbacks.mlflow_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.mlflow_callback.construct_html_from_prompt_and_generation(...)
Construct an html element from a prompt and a generation.
callbacks.mlflow_callback.import_mlflow()
Import the mlflow python package and raise an error if it is not installed.
callbacks.openai_info.get_openai_token_cost_for_model(...)
Get the cost in USD for a given model and number of tokens.
callbacks.openai_info.standardize_model_name(...)
Standardize the model name to a format that can be used in the OpenAI API, given the model name and whether it is used for completion (defaults to False).
callbacks.streamlit.__init__.StreamlitCallbackHandler(...)
Construct a new StreamlitCallbackHandler.
callbacks.tracers.langchain.log_error_once(...)
Log an error once.
callbacks.tracers.langchain.wait_for_all_tracers()
Wait for all tracers to finish.
callbacks.tracers.langchain_v1.get_headers()
Get the headers for the LangChain API.
callbacks.tracers.stdout.elapsed(run)
Get the elapsed time of a run.
callbacks.tracers.stdout.try_json_stringify(...)
Try to stringify an object to JSON.
callbacks.utils.flatten_dict(nested_dict[, ...])
Flattens a nested dictionary into a flat dictionary.
callbacks.utils.hash_string(s)
Hash a string using sha1.
callbacks.utils.import_pandas()
Import the pandas python package and raise an error if it is not installed.
callbacks.utils.import_spacy()
Import the spacy python package and raise an error if it is not installed.
callbacks.utils.import_textstat()
Import the textstat python package and raise an error if it is not installed.
callbacks.utils.load_json(json_path)
Load json file to a string.
callbacks.wandb_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.wandb_callback.construct_html_from_prompt_and_generation(...)
Construct an html element from a prompt and a generation.
callbacks.wandb_callback.import_wandb()
Import the wandb python package and raise an error if it is not installed.
callbacks.wandb_callback.load_json_to_dict(...)
Load json file to a dictionary.
callbacks.whylabs_callback.import_langkit([...])
Import the langkit python package and raise an error if it is not installed.
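As one concrete usage sketch, the get_openai_callback context manager listed above tallies token usage and cost across every OpenAI call made inside its block (assumes an OPENAI_API_KEY).

from langchain.callbacks import get_openai_callback
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
with get_openai_callback() as cb:
    llm("What is the capital of France?")
# The handler accumulated usage across all calls in the block.
print(cb.total_tokens, cb.prompt_tokens, cb.completion_tokens, cb.total_cost)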
langchain.chains: Chains
Chains are easily reusable components which can be linked together.
Classes
chains.api.base.APIChain
Chain that makes API calls and summarizes the responses to answer a question.
chains.api.openapi.chain.OpenAPIEndpointChain
Chain interacts with an OpenAPI endpoint using natural language.
chains.api.openapi.requests_chain.APIRequesterChain
Get the request parser.
chains.api.openapi.requests_chain.APIRequesterOutputParser
Parse the request and error tags.
chains.api.openapi.response_chain.APIResponderChain
Get the response parser.
chains.api.openapi.response_chain.APIResponderOutputParser
Parse the response and error tags.
chains.base.Chain
Abstract base class for creating structured sequences of calls to components.
chains.combine_documents.base.AnalyzeDocumentChain
Chain that splits a document, then analyzes it in pieces.
chains.combine_documents.base.BaseCombineDocumentsChain
Base interface for chains combining documents.
chains.combine_documents.map_reduce.MapReduceDocumentsChain
Combining documents by mapping a chain over them, then combining results.
chains.combine_documents.map_rerank.MapRerankDocumentsChain
Combining documents by mapping a chain over them, then reranking results.
chains.combine_documents.reduce.AsyncCombineDocsProtocol(...)
Interface for the combine_docs method.
chains.combine_documents.reduce.CombineDocsProtocol(...)
Interface for the combine_docs method.
chains.combine_documents.reduce.ReduceDocumentsChain
Combining documents by recursively reducing them.
chains.combine_documents.refine.RefineDocumentsChain
Combine documents by doing a first pass and then refining on more documents.
chains.combine_documents.stuff.StuffDocumentsChain
Chain that combines documents by stuffing into context.
chains.constitutional_ai.base.ConstitutionalChain
Chain for applying constitutional principles.
chains.constitutional_ai.models.ConstitutionalPrinciple
Class for a constitutional principle.
chains.conversation.base.ConversationChain
Chain to have a conversation and load context from memory.
chains.conversational_retrieval.base.BaseConversationalRetrievalChain
Chain for chatting with an index.
chains.conversational_retrieval.base.ChatVectorDBChain
Chain for chatting with a vector database.
chains.conversational_retrieval.base.ConversationalRetrievalChain
Chain for having a conversation based on retrieved documents.
chains.flare.base.FlareChain
Create a new model by parsing and validating input data from keyword arguments.
chains.flare.base.QuestionGeneratorChain
Create a new model by parsing and validating input data from keyword arguments.
chains.flare.prompts.FinishedOutputParser
Create a new model by parsing and validating input data from keyword arguments.
chains.graph_qa.base.GraphQAChain
Chain for question-answering against a graph.
chains.graph_qa.cypher.GraphCypherQAChain
Chain for question-answering against a graph by generating Cypher statements.
chains.graph_qa.hugegraph.HugeGraphQAChain
Chain for question-answering against a graph by generating gremlin statements.
chains.graph_qa.kuzu.KuzuQAChain
Chain for question-answering against a graph by generating Cypher statements for Kùzu.
chains.graph_qa.nebulagraph.NebulaGraphQAChain
Chain for question-answering against a graph by generating nGQL statements.
chains.graph_qa.sparql.GraphSparqlQAChain
Chain for question-answering against an RDF or OWL graph by generating SPARQL statements.
chains.hyde.base.HypotheticalDocumentEmbedder
Generate a hypothetical document for the query, and then embed it.
chains.llm.LLMChain
Chain to run queries against LLMs.
chains.llm_bash.base.LLMBashChain
Chain that interprets a prompt and executes bash code to perform bash operations.
chains.llm_bash.prompt.BashOutputParser
Parser for bash output.
chains.llm_checker.base.LLMCheckerChain
Chain for question-answering with self-verification.
chains.llm_math.base.LLMMathChain
Chain that interprets a prompt and executes Python code to do math.
chains.llm_requests.LLMRequestsChain
Chain that hits a URL and then uses an LLM to parse results.
chains.llm_summarization_checker.base.LLMSummarizationCheckerChain
Chain for question-answering with self-verification.
chains.mapreduce.MapReduceChain
Map-reduce chain.
chains.moderation.OpenAIModerationChain
Pass input through a moderation endpoint.
chains.natbot.base.NatBotChain
Implement an LLM driven browser.
chains.natbot.crawler.ElementInViewPort
A typed dictionary containing information about elements in the viewport.
chains.openai_functions.citation_fuzzy_match.FactWithEvidence
Class representing single statement.
chains.openai_functions.citation_fuzzy_match.QuestionAnswer
A question and its answer as a list of facts, each of which should have a source.
chains.openai_functions.openapi.SimpleRequestChain
Create a new model by parsing and validating input data from keyword arguments.
chains.openai_functions.qa_with_structure.AnswerWithSources
An answer to the question being asked, with sources.
chains.pal.base.PALChain
Implements Program-Aided Language Models.
chains.prompt_selector.BasePromptSelector
Create a new model by parsing and validating input data from keyword arguments.
chains.prompt_selector.ConditionalPromptSelector
Prompt collection that goes through conditionals.
chains.qa_generation.base.QAGenerationChain
Create a new model by parsing and validating input data from keyword arguments.
chains.qa_with_sources.base.BaseQAWithSourcesChain
Question answering with sources over documents.
chains.qa_with_sources.base.QAWithSourcesChain
Question answering with sources over documents.
chains.qa_with_sources.loading.LoadingCallable(...)
Interface for loading the combine documents chain.
chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain
Question-answering with sources over an index.
chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain
Question-answering with sources over a vector database.
chains.query_constructor.base.StructuredQueryOutputParser
Create a new model by parsing and validating input data from keyword arguments.
chains.query_constructor.ir.Comparator(value)
Enumerator of the comparison operators.
chains.query_constructor.ir.Comparison
A comparison to a value.
chains.query_constructor.ir.Expr
Create a new model by parsing and validating input data from keyword arguments.
chains.query_constructor.ir.FilterDirective
A filtering expression.
chains.query_constructor.ir.Operation
A logical operation over other directives.
chains.query_constructor.ir.Operator(value)
Enumerator of the operations.
chains.query_constructor.ir.StructuredQuery
Create a new model by parsing and validating input data from keyword arguments.
chains.query_constructor.ir.Visitor()
Defines interface for IR translation using visitor pattern.
chains.query_constructor.parser.QueryTransformer
chains.query_constructor.schema.AttributeInfo
Information about a data source attribute.
chains.question_answering.__init__.LoadingCallable(...)
Interface for loading the combine documents chain.
chains.retrieval_qa.base.BaseRetrievalQA
Create a new model by parsing and validating input data from keyword arguments.
chains.retrieval_qa.base.RetrievalQA
Chain for question-answering against an index.
chains.retrieval_qa.base.VectorDBQA
Chain for question-answering against a vector database.
chains.router.base.MultiRouteChain
Use a single chain to route an input to one of multiple candidate chains.
chains.router.base.Route(destination, ...)
Create new instance of Route(destination, next_inputs)
chains.router.base.RouterChain
Chain that outputs the name of a destination chain and the inputs to it.
chains.router.embedding_router.EmbeddingRouterChain
Class that uses embeddings to route between options.
chains.router.llm_router.LLMRouterChain
A router chain that uses an LLM chain to perform routing.
chains.router.llm_router.RouterOutputParser
Parser for output of the router chain in the multi-prompt chain.
chains.router.multi_prompt.MultiPromptChain
A multi-route chain that uses an LLM router chain to choose amongst prompts.
chains.router.multi_retrieval_qa.MultiRetrievalQAChain
A multi-route chain that uses an LLM router chain to choose amongst retrieval qa chains.
chains.sequential.SequentialChain
Chain where the outputs of one chain feed directly into the next.
chains.sequential.SimpleSequentialChain
Simple chain where the outputs of one step feed directly into the next.
chains.sql_database.base.SQLDatabaseChain
Chain for interacting with SQL Database.
chains.sql_database.base.SQLDatabaseSequentialChain
Sequential chain for querying a SQL database.
chains.summarize.__init__.LoadingCallable(...)
Interface for loading the combine documents chain.
chains.transform.TransformChain
Chain that transforms chain output via a custom function.
Functions
chains.graph_qa.cypher.extract_cypher(text)
Extract Cypher code from a text.
chains.loading.load_chain(path, **kwargs)
Unified method for loading a chain from LangChainHub or the local filesystem.
chains.loading.load_chain_from_config(...)
Load chain from Config Dict.
chains.openai_functions.base.convert_python_function_to_openai_function(...)
Convert a Python function to an OpenAI function-calling API compatible dict.
chains.openai_functions.base.convert_to_openai_function(...)
Convert a raw function/class to an OpenAI function.
chains.openai_functions.base.create_openai_fn_chain(...)
Create an LLM chain that uses OpenAI functions.
chains.openai_functions.base.create_structured_output_chain(...)
Create an LLMChain that uses an OpenAI function to get a structured output.
chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain(llm)
Create a citation fuzzy match chain.
chains.openai_functions.extraction.create_extraction_chain(...)
Creates a chain that extracts information from a passage.
chains.openai_functions.extraction.create_extraction_chain_pydantic(...)
Creates a chain that extracts information from a passage using pydantic schema.
chains.openai_functions.openapi.get_openapi_chain(spec)
Create a chain for querying an API from an OpenAPI spec.
chains.openai_functions.openapi.openapi_spec_to_openai_fn(spec)
Convert a valid OpenAPI spec to the JSON Schema format expected for OpenAI functions.
chains.openai_functions.qa_with_structure.create_qa_with_sources_chain(...)
Create a question answering chain that returns an answer with sources.
chains.openai_functions.qa_with_structure.create_qa_with_structure_chain(...)
Create a question answering chain that returns an answer with a structured output.
chains.openai_functions.tagging.create_tagging_chain(...)
Creates a chain that extracts information from a passage.
chains.openai_functions.tagging.create_tagging_chain_pydantic(...)
Creates a chain that extracts information from a passage.
chains.openai_functions.utils.get_llm_kwargs(...)
Returns the kwargs for the LLMChain constructor.
chains.prompt_selector.is_chat_model(llm)
Check if the language model is a chat model.
chains.prompt_selector.is_llm(llm)
Check if the language model is an LLM.
chains.qa_with_sources.loading.load_qa_with_sources_chain(llm)
Load question answering with sources chain.
chains.query_constructor.base.load_query_constructor_chain(...)
Load a query constructor chain.
chains.query_constructor.parser.get_parser([...])
Returns a parser for the query language.
chains.question_answering.__init__.load_qa_chain(llm)
Load question answering chain.
chains.summarize.__init__.load_summarize_chain(llm)
Load summarizing chain.
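LLMChain is the basic building block that most of the chains above compose. A minimal sketch of constructing and running one (assumes an OPENAI_API_KEY):

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
print(chain.run("colorful socks"))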
langchain.chat_models: Chat Models
Classes
chat_models.anthropic.ChatAnthropic
Wrapper around Anthropic's large language model.
chat_models.azure_openai.AzureChatOpenAI
Wrapper around Azure OpenAI Chat Completion API.
chat_models.base.BaseChatModel
Create a new model by parsing and validating input data from keyword arguments.
chat_models.base.SimpleChatModel
Create a new model by parsing and validating input data from keyword arguments.
chat_models.fake.FakeListChatModel
Fake ChatModel for testing purposes.
chat_models.google_palm.ChatGooglePalm
Wrapper around Google's PaLM Chat API.
chat_models.google_palm.ChatGooglePalmError
Error raised when there is an issue with the Google PaLM API.
chat_models.human.HumanInputChatModel
ChatModel wrapper which returns user input as the response.
chat_models.jinachat.JinaChat
JinaChat is a wrapper for Jina AI's LLM service, providing cost-effective image chat capabilities in comparison to other LLM APIs.
chat_models.openai.ChatOpenAI
Wrapper around OpenAI Chat large language models.
chat_models.promptlayer_openai.PromptLayerChatOpenAI
Wrapper around OpenAI Chat large language models and PromptLayer.
chat_models.vertexai.ChatVertexAI
Wrapper around Vertex AI large language models.
Functions
chat_models.google_palm.chat_with_retry(llm, ...)
Use tenacity to retry the completion call.
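A minimal sketch of calling a chat model: chat models take a list of messages rather than a single string and return an AIMessage (assumes an OPENAI_API_KEY).

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(temperature=0)
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming."),
]
print(chat(messages).content)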
langchain.client: Client
LangChain + Client.
Classes
client.runner_utils.InputFormatError
Raised when the input format is invalid.
Functions
client.runner_utils.run_llm(llm, inputs, ...)
Run the language model on the example.
client.runner_utils.run_llm_or_chain(...[, ...])
Run the Chain or language model synchronously.
client.runner_utils.run_on_dataset(...[, ...])
Run the Chain or language model on a dataset and store traces to the specified project name.
client.runner_utils.run_on_examples(...[, ...])
Run the Chain or language model on examples and store traces to the specified project name.
langchain.docstore: Docstore
Wrappers on top of docstores.
Classes
docstore.arbitrary_fn.DocstoreFn(lookup_fn)
Langchain Docstore via arbitrary lookup function.
docstore.base.AddableMixin()
Mixin class that supports adding texts.
docstore.base.Docstore()
Interface for accessing a place that stores documents.
docstore.in_memory.InMemoryDocstore([_dict])
Simple in memory docstore in the form of a dict.
docstore.wikipedia.Wikipedia()
Wrapper around wikipedia API.
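A minimal sketch of the Docstore interface using the in-memory implementation; the document text here is illustrative.

from langchain.docstore.document import Document
from langchain.docstore.in_memory import InMemoryDocstore

store = InMemoryDocstore({"1": Document(page_content="Docstores map string IDs to documents.")})
store.add({"2": Document(page_content="add() comes from the AddableMixin.")})
result = store.search("1")   # returns the Document
missing = store.search("42") # a miss returns an explanatory string rather than raising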
langchain.document_loaders: Document Loaders
All different types of document loaders.
Classes
document_loaders.acreom.AcreomLoader(path[, ...])
Loader that loads acreom vault from a directory.
document_loaders.airbyte_json.AirbyteJSONLoader(...)
Loader that loads local airbyte json files.
document_loaders.airtable.AirtableLoader(...)
Loader for Airtable tables.
document_loaders.apify_dataset.ApifyDatasetLoader
Loading Documents from Apify datasets.
document_loaders.arxiv.ArxivLoader(query[, ...])
Loads a query result from arxiv.org into a list of Documents.
document_loaders.azlyrics.AZLyricsLoader(...)
Loader that loads AZLyrics webpages.
document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(...)
Loading Documents from Azure Blob Storage.
document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader(...)
Loading Documents from Azure Blob Storage.
document_loaders.base.BaseBlobParser()
Abstract interface for blob parsers.
document_loaders.base.BaseLoader()
Interface for loading Documents.
document_loaders.bibtex.BibtexLoader(...[, ...])
Loads a bibtex file into a list of Documents.
document_loaders.bigquery.BigQueryLoader(query)
Loads a query result from BigQuery into a list of documents.
document_loaders.bilibili.BiliBiliLoader(...)
Loader that loads bilibili transcripts.
document_loaders.blackboard.BlackboardLoader(...)
Loads all documents from a Blackboard course.
document_loaders.blob_loaders.file_system.FileSystemBlobLoader(path, *)
Blob loader for the local file system.
document_loaders.blob_loaders.schema.Blob
A blob is used to represent raw data by either reference or value.
document_loaders.blob_loaders.schema.BlobLoader()
Abstract interface for blob loaders implementation.
document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader(...)
Load YouTube URLs as audio file(s).
document_loaders.blockchain.BlockchainDocumentLoader(...)
Loads elements from a blockchain smart contract into Langchain documents.
document_loaders.blockchain.BlockchainType(value)
Enumerator of the supported blockchains.
document_loaders.brave_search.BraveSearchLoader(...)
Loads a query result from Brave Search engine into a list of Documents.
document_loaders.chatgpt.ChatGPTLoader(log_file)
Load conversations from exported ChatGPT data.
document_loaders.college_confidential.CollegeConfidentialLoader(...)
Loader that loads College Confidential webpages.
document_loaders.confluence.ConfluenceLoader(url)
Load Confluence pages.
document_loaders.confluence.ContentFormat(value)
Enumerator of the content formats of Confluence page.
document_loaders.conllu.CoNLLULoader(file_path)
Load CoNLL-U files.
document_loaders.csv_loader.CSVLoader(file_path)
Loads a CSV file into a list of documents.
document_loaders.csv_loader.UnstructuredCSVLoader(...)
Loader that uses unstructured to load CSV files.
document_loaders.cube_semantic.CubeSemanticLoader(...)
Load Cube semantic layer metadata.
document_loaders.dataframe.DataFrameLoader(...)
Load Pandas DataFrame.
document_loaders.diffbot.DiffbotLoader(...)
Loads Diffbot file json.
document_loaders.directory.DirectoryLoader(...)
Load documents from a directory.
document_loaders.discord.DiscordChatLoader(...)
Load Discord chat logs.
document_loaders.docugami.DocugamiLoader
Loads processed docs from Docugami.
document_loaders.duckdb_loader.DuckDBLoader(query)
Loads a query result from DuckDB into a list of documents.
document_loaders.email.OutlookMessageLoader(...)
Loads Outlook Message files using extract_msg.
document_loaders.email.UnstructuredEmailLoader(...)
Loader that uses unstructured to load email files.
document_loaders.embaas.BaseEmbaasLoader
Base class for embedding a model into an Embaas document extraction API.
document_loaders.embaas.EmbaasBlobLoader
Embaas's document byte loader.
document_loaders.embaas.EmbaasDocumentExtractionParameters
Parameters for the embaas document extraction API.
document_loaders.embaas.EmbaasDocumentExtractionPayload
Payload for the Embaas document extraction API.
document_loaders.embaas.EmbaasLoader
Embaas's document loader.
document_loaders.epub.UnstructuredEPubLoader(...)
Loader that uses unstructured to load epub files.
document_loaders.evernote.EverNoteLoader(...)
EverNote Loader.
document_loaders.excel.UnstructuredExcelLoader(...)
Loader that uses unstructured to load Microsoft Excel files.
document_loaders.facebook_chat.FacebookChatLoader(path)
Loads Facebook messages json directory dump.
document_loaders.fauna.FaunaLoader(query, ...)
FaunaDB Loader.
document_loaders.figma.FigmaFileLoader(...)
Loads Figma file json.
document_loaders.gcs_directory.GCSDirectoryLoader(...)
Loads Documents from GCS.
document_loaders.gcs_file.GCSFileLoader(...)
Load Documents from a GCS file.
document_loaders.generic.GenericLoader(...)
A generic document loader.
document_loaders.git.GitLoader(repo_path[, ...])
Loads files from a Git repository into a list of documents.
document_loaders.gitbook.GitbookLoader(web_page)
Load GitBook data.
document_loaders.github.BaseGitHubLoader
Load issues of a GitHub repository.
document_loaders.github.GitHubIssuesLoader
Load issues of a GitHub repository.
document_loaders.googledrive.GoogleDriveLoader
Loads Google Docs from Google Drive.
document_loaders.gutenberg.GutenbergLoader(...)
Loader that uses urllib to load .txt web files.
document_loaders.helpers.FileEncoding(...)
A file encoding as the NamedTuple.
document_loaders.hn.HNLoader(web_path[, ...])
Load Hacker News data from either main page results or the comments page.
document_loaders.html.UnstructuredHTMLLoader(...)
Loader that uses unstructured to load HTML files.
document_loaders.html_bs.BSHTMLLoader(file_path)
Loader that uses beautiful soup to parse HTML files.
document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader(path)
Load Documents from the Hugging Face Hub.
document_loaders.ifixit.IFixitLoader(web_path)
Load iFixit repair guides, device wikis and answers.
document_loaders.image.UnstructuredImageLoader(...)
Loader that uses unstructured to load image files, such as PNGs and JPGs.
document_loaders.image_captions.ImageCaptionLoader(...)
Loads the captions of an image.
document_loaders.imsdb.IMSDbLoader(web_path)
Loads IMSDb webpages.
document_loaders.iugu.IuguLoader(resource[, ...])
Loader that fetches data from IUGU.
document_loaders.joplin.JoplinLoader([...])
Loader that fetches notes from Joplin.
document_loaders.json_loader.JSONLoader(...)
Loads a JSON file using a jq schema.
document_loaders.larksuite.LarkSuiteDocLoader(...)
Loads LarkSuite (FeiShu) document.
document_loaders.markdown.UnstructuredMarkdownLoader(...)
Loader that uses unstructured to load markdown files.
document_loaders.mastodon.MastodonTootsLoader(...)
Mastodon toots loader.
document_loaders.max_compute.MaxComputeLoader(...)
Loads a query result from Alibaba Cloud MaxCompute table into documents.
document_loaders.mediawikidump.MWDumpLoader(...)
Load a MediaWiki dump from an XML file.
document_loaders.merge.MergedDataLoader(loaders)
Merge documents from a list of loaders.
document_loaders.mhtml.MHTMLLoader(file_path)
Loader that uses beautiful soup to parse MHTML files.
document_loaders.modern_treasury.ModernTreasuryLoader(...)
Loader that fetches data from Modern Treasury.
document_loaders.notebook.NotebookLoader(path)
Loader that loads .ipynb notebook files.
document_loaders.notion.NotionDirectoryLoader(path)
Loader that loads Notion directory dump.
document_loaders.notiondb.NotionDBLoader(...)
Notion DB Loader.
document_loaders.obsidian.ObsidianLoader(path)
Loader that loads Obsidian files from disk.
document_loaders.odt.UnstructuredODTLoader(...)
Loader that uses unstructured to load open office ODT files.
document_loaders.onedrive.OneDriveLoader
Create a new model by parsing and validating input data from keyword arguments.
document_loaders.onedrive_file.OneDriveFileLoader
Create a new model by parsing and validating input data from keyword arguments.
document_loaders.open_city_data.OpenCityDataLoader(...)
Loader that loads Open city data.
document_loaders.org_mode.UnstructuredOrgModeLoader(...)
Loader that uses unstructured to load Org-Mode files.
document_loaders.parsers.audio.OpenAIWhisperParser([...])
Transcribe and parse audio files.
document_loaders.parsers.generic.MimeTypeBasedParser(...)
A parser that uses mime-types to determine how to parse a blob.
document_loaders.parsers.grobid.GrobidParser(...)
Loader that uses Grobid to load article PDF files.
document_loaders.parsers.grobid.ServerUnavailableException
Exception raised when the Grobid server is unavailable.
document_loaders.parsers.html.bs4.BS4HTMLParser(*)
Parser that uses beautiful soup to parse HTML files.
document_loaders.parsers.language.code_segmenter.CodeSegmenter(code)
The abstract class for the code segmenter.
document_loaders.parsers.language.javascript.JavaScriptSegmenter(code)
The code segmenter for JavaScript.
document_loaders.parsers.language.language_parser.LanguageParser([...])
Language parser that splits code using the respective language syntax.
document_loaders.parsers.language.python.PythonSegmenter(code)
The code segmenter for Python.
document_loaders.parsers.pdf.PDFMinerParser()
Parse PDFs with PDFMiner.
document_loaders.parsers.pdf.PDFPlumberParser([...])
Parse PDFs with PDFPlumber.
document_loaders.parsers.pdf.PyMuPDFParser([...])
Parse PDFs with PyMuPDF.
document_loaders.parsers.pdf.PyPDFParser([...])
Loads a PDF with pypdf and chunks at character level.
document_loaders.parsers.pdf.PyPDFium2Parser()
Parse PDFs with PyPDFium2.
document_loaders.parsers.txt.TextParser()
Parser for text blobs.
document_loaders.pdf.BasePDFLoader(file_path)
Base loader class for PDF files.
document_loaders.pdf.MathpixPDFLoader(file_path)
Loader that uses the Mathpix service to load PDF files.
document_loaders.pdf.OnlinePDFLoader(file_path)
Loader that loads online PDFs.
document_loaders.pdf.PDFMinerLoader(file_path)
Loader that uses PDFMiner to load PDF files.
document_loaders.pdf.PDFMinerPDFasHTMLLoader(...)
Loader that uses PDFMiner to load PDF files as HTML content.
document_loaders.pdf.PDFPlumberLoader(file_path)
Loader that uses pdfplumber to load PDF files.
document_loaders.pdf.PyMuPDFLoader(file_path)
Loader that uses PyMuPDF to load PDF files.
document_loaders.pdf.PyPDFDirectoryLoader(path)
Loads a directory with PDF files with pypdf and chunks at character level.
document_loaders.pdf.PyPDFLoader(file_path)
Loads a PDF with pypdf and chunks at character level.
document_loaders.pdf.PyPDFium2Loader(file_path)
Loads a PDF with pypdfium2 and chunks at character level.
document_loaders.pdf.UnstructuredPDFLoader(...)
Loader that uses unstructured to load PDF files.
document_loaders.powerpoint.UnstructuredPowerPointLoader(...)
Loader that uses unstructured to load powerpoint files.
document_loaders.psychic.PsychicLoader(...)
Loader that loads documents from Psychic.dev.
document_loaders.pyspark_dataframe.PySparkDataFrameLoader([...])
Load PySpark DataFrames.
document_loaders.python.PythonLoader(file_path)
Load Python files, respecting any non-default encoding if specified.
document_loaders.readthedocs.ReadTheDocsLoader(path)
Loader that loads ReadTheDocs documentation directory dump.
document_loaders.recursive_url_loader.RecursiveUrlLoader(url)
Loader that loads all child links from a given url.
document_loaders.reddit.RedditPostsLoader(...)
Reddit posts loader.
document_loaders.roam.RoamLoader(path)
Loader that loads Roam files from disk.
document_loaders.rst.UnstructuredRSTLoader(...)
Loader that uses unstructured to load RST files.
document_loaders.rtf.UnstructuredRTFLoader(...)
Loader that uses unstructured to load rtf files.
document_loaders.s3_directory.S3DirectoryLoader(bucket)
Loading logic for documents from Amazon S3.
document_loaders.s3_file.S3FileLoader(...)
Loading logic for documents from Amazon S3.
document_loaders.sitemap.SitemapLoader(web_path)
Loader that fetches a sitemap and loads those URLs.
document_loaders.slack_directory.SlackDirectoryLoader(...)
Loader for loading documents from a Slack directory dump.
document_loaders.snowflake_loader.SnowflakeLoader(...)
Loads a query result from Snowflake into a list of documents.
document_loaders.spreedly.SpreedlyLoader(...)
Loader that fetches data from Spreedly API.
document_loaders.srt.SRTLoader(file_path)
Loader for .srt (subtitle) files.
document_loaders.stripe.StripeLoader(resource)
Loader that fetches data from Stripe.
document_loaders.telegram.TelegramChatApiLoader([...])
Loader that loads Telegram chat json directory dump.
document_loaders.telegram.TelegramChatFileLoader(path)
Loader that loads Telegram chat json directory dump.
document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader(...)
Loading logic for loading documents from Tencent Cloud COS.
document_loaders.tencent_cos_file.TencentCOSFileLoader(...)
Loading logic for loading documents from Tencent Cloud COS.
document_loaders.text.TextLoader(file_path)
Load text files.
document_loaders.tomarkdown.ToMarkdownLoader(...)
Loader that loads HTML to markdown using 2markdown.
document_loaders.toml.TomlLoader(source)
A TOML document loader that inherits from the BaseLoader class.
document_loaders.trello.TrelloLoader(client, ...)
Trello loader.
document_loaders.twitter.TwitterTweetLoader(...)
Twitter tweets loader.
document_loaders.unstructured.UnstructuredAPIFileIOLoader(file)
UnstructuredAPIFileIOLoader uses the Unstructured API to load files.
document_loaders.unstructured.UnstructuredAPIFileLoader([...])
UnstructuredAPIFileLoader uses the Unstructured API to load files.
document_loaders.unstructured.UnstructuredBaseLoader([mode])
Loader that uses unstructured to load files.
document_loaders.unstructured.UnstructuredFileIOLoader(file)
UnstructuredFileIOLoader uses unstructured to load files.
document_loaders.unstructured.UnstructuredFileLoader(...)
UnstructuredFileLoader uses unstructured to load files.
document_loaders.url.UnstructuredURLLoader(urls)
Loader that uses unstructured to load HTML files.
document_loaders.url_playwright.PlaywrightURLLoader(urls)
Loader that uses Playwright to load a page, and unstructured to parse the HTML.
document_loaders.url_selenium.SeleniumURLLoader(urls)
Loader that uses Selenium to load a page, and unstructured to parse the HTML.
document_loaders.weather.WeatherDataLoader(...)
Weather Reader.
document_loaders.web_base.WebBaseLoader(web_path)
Loader that uses urllib and beautiful soup to load webpages.
document_loaders.whatsapp_chat.WhatsAppChatLoader(path)
Loader that loads WhatsApp messages text file.
document_loaders.wikipedia.WikipediaLoader(query)
Loads a query result from www.wikipedia.org into a list of Documents.
document_loaders.word_document.Docx2txtLoader(...)
Loads a DOCX with docx2txt and chunks at character level.
document_loaders.word_document.UnstructuredWordDocumentLoader(...)
Loader that uses unstructured to load word documents.
document_loaders.xml.UnstructuredXMLLoader(...)
Loader that uses unstructured to load XML files.
document_loaders.youtube.GoogleApiYoutubeLoader(...)
Loader that loads all videos from a channel.
document_loaders.youtube.YoutubeLoader(video_id)
Loader that loads Youtube transcripts.
Functions
document_loaders.chatgpt.concatenate_rows(...)
Combine message information in a readable format ready to be used.
document_loaders.facebook_chat.concatenate_rows(row)
Combine message information in a readable format ready to be used.
document_loaders.helpers.detect_file_encodings(...)
Try to detect the file encoding.
document_loaders.notebook.concatenate_cells(...)
Combine cells information in a readable format ready to be used.
document_loaders.notebook.remove_newlines(x)
Recursively remove newlines, no matter the data structure they are stored in.
document_loaders.parsers.registry.get_parser(...)
Get a parser by parser name.
document_loaders.telegram.concatenate_rows(row)
Combine message information in a readable format ready to be used.
document_loaders.telegram.text_to_docs(text)
Converts a string or list of strings to a list of Documents with metadata.
document_loaders.unstructured.get_elements_from_api([...])
Retrieves a list of elements from the Unstructured API.
document_loaders.unstructured.satisfies_min_unstructured_version(...)
Checks to see if the installed unstructured version exceeds the minimum version for the feature in question.
document_loaders.unstructured.validate_unstructured_version(...)
Raises an error if the unstructured version does not exceed the specified minimum.
document_loaders.whatsapp_chat.concatenate_rows(...)
Combine message information in a readable format ready to be used.
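All of the loaders above share one contract: load() returns a list of Document objects with page_content and metadata. A minimal sketch with TextLoader ("notes.txt" is a hypothetical local file):

from langchain.document_loaders import TextLoader

loader = TextLoader("notes.txt", encoding="utf-8")
docs = loader.load()
print(docs[0].page_content[:80], docs[0].metadata)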
langchain.document_transformers: Document Transformers
Transform documents.
Classes
document_transformers.EmbeddingsClusteringFilter
Perform K-means clustering on document vectors.
document_transformers.EmbeddingsRedundantFilter
Filter that drops redundant documents by comparing their embeddings.
Functions¶
document_transformers.get_stateful_documents(...)
Convert a list of documents to a list of documents with state.
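A minimal sketch of a document transformer: EmbeddingsRedundantFilter embeds the documents and drops near-duplicates (assumes an OPENAI_API_KEY; the sample texts are illustrative).

from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document

docs = [
    Document(page_content="LangChain helps compose LLM applications."),
    Document(page_content="LangChain helps compose LLM applications!"),  # near-duplicate
    Document(page_content="Entirely different content."),
]
redundant_filter = EmbeddingsRedundantFilter(embeddings=OpenAIEmbeddings())
unique_docs = redundant_filter.transform_documents(docs)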
langchain.embeddings: Embeddings
Wrappers around embedding modules.
Classes
embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding
Wrapper for Aleph Alpha's Asymmetric Embeddings. AA provides an endpoint to embed a document and a query.
embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding
The symmetric version of the Aleph Alpha's semantic embeddings.
embeddings.base.Embeddings()
Interface for embedding models.
embeddings.bedrock.BedrockEmbeddings
Embeddings provider to invoke Bedrock embedding models.
embeddings.clarifai.ClarifaiEmbeddings
Wrapper around Clarifai embedding models.
embeddings.cohere.CohereEmbeddings
Wrapper around Cohere embedding models.
embeddings.dashscope.DashScopeEmbeddings
Wrapper around DashScope embedding models.
embeddings.deepinfra.DeepInfraEmbeddings
Wrapper around Deep Infra's embedding inference service.
embeddings.elasticsearch.ElasticsearchEmbeddings(...)
Wrapper around Elasticsearch embedding models.
embeddings.embaas.EmbaasEmbeddings
Wrapper around embaas's embedding service.
embeddings.embaas.EmbaasEmbeddingsPayload
Payload for the embaas embeddings API.
embeddings.fake.FakeEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
embeddings.google_palm.GooglePalmEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
embeddings.huggingface.HuggingFaceEmbeddings
Wrapper around sentence_transformers embedding models.
embeddings.huggingface.HuggingFaceInstructEmbeddings
Wrapper around sentence_transformers embedding models.
embeddings.huggingface_hub.HuggingFaceHubEmbeddings
Wrapper around HuggingFaceHub embedding models.
embeddings.jina.JinaEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
embeddings.llamacpp.LlamaCppEmbeddings
Wrapper around llama.cpp embedding models.
embeddings.minimax.MiniMaxEmbeddings
Wrapper around MiniMax's embedding inference service.
embeddings.modelscope_hub.ModelScopeEmbeddings
Wrapper around modelscope_hub embedding models.
embeddings.mosaicml.MosaicMLInstructorEmbeddings
Wrapper around MosaicML's embedding inference service.
embeddings.octoai_embeddings.OctoAIEmbeddings
Wrapper around OctoAI Compute Service embedding models.
embeddings.openai.OpenAIEmbeddings
Wrapper around OpenAI embedding models.
embeddings.sagemaker_endpoint.EmbeddingsContentHandler()
Content handler for embedding models.
embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings
Wrapper around custom Sagemaker Inference Endpoints.
embeddings.self_hosted.SelfHostedEmbeddings
Runs custom embedding models on self-hosted remote hardware.
embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings
Runs sentence_transformers embedding models on self-hosted remote hardware.
embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
embeddings.spacy_embeddings.SpacyEmbeddings
SpacyEmbeddings is a class for generating embeddings using the Spacy library.
embeddings.tensorflow_hub.TensorflowHubEmbeddings
Wrapper around tensorflow_hub embedding models.
embeddings.vertexai.VertexAIEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
Functions¶
embeddings.dashscope.embed_with_retry(...)
Use tenacity to retry the embedding call.
embeddings.google_palm.embed_with_retry(...)
Use tenacity to retry the completion call.
embeddings.minimax.embed_with_retry(...)
Use tenacity to retry the completion call.
embeddings.openai.embed_with_retry(...)
Use tenacity to retry the embedding call.
embeddings.self_hosted_hugging_face.load_embedding_model(...)
Load the embedding model.
langchain.env: Env¶
Functions¶
env.get_runtime_environment()
Get information about the environment.
langchain.evaluation: Evaluation¶
Evaluation chains for grading LLM and Chain outputs.
This module contains off-the-shelf evaluation chains for grading the output of
LangChain primitives such as language models and chains.
Loading an evaluator
To load an evaluator, you can use the load_evaluators or
load_evaluator functions with the
names of the evaluators to load.
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("qa")
evaluator.evaluate_strings(
    prediction="We sold more than 40,000 units last week",
    input="How many units did we sell last week?",
    reference="We sold 32,378 units",
)
The evaluator must be one of EvaluatorType.
Datasets
To load one of the LangChain HuggingFace datasets, you can use the load_dataset function with the
name of the dataset to load.
from langchain.evaluation import load_dataset
ds = load_dataset("llm-math")
Some common use cases for evaluation include:
Grading the accuracy of a response against ground truth answers: QAEvalChain
Comparing the output of two models: PairwiseStringEvalChain
Judging the efficacy of an agent’s tool usage: TrajectoryEvalChain
Checking whether an output complies with a set of criteria: CriteriaEvalChain
Computing semantic difference between a prediction and reference: EmbeddingDistanceEvalChain or between two predictions: PairwiseEmbeddingDistanceEvalChain
Measuring the string distance between a prediction and reference: StringDistanceEvalChain or between two predictions: PairwiseStringDistanceEvalChain
Low-level API
These evaluators implement one of the following interfaces:
StringEvaluator: Evaluate a prediction string against a reference label and/or input context.
PairwiseStringEvaluator: Evaluate two prediction strings against each other. Useful for scoring preferences, measuring similarity between two chain or llm agents, or comparing outputs on similar inputs.
AgentTrajectoryEvaluator: Evaluate the full sequence of actions taken by an agent.
These interfaces enable easier composability and usage within a higher level evaluation framework.
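For example, a custom evaluator only needs to implement the low-level interface. The following is a minimal sketch, assuming the StringEvaluator interface is implemented by overriding _evaluate_strings (invoked by the public evaluate_strings method) and that returning a dict with a score key is sufficient; the ExactMatchEvaluator class itself is hypothetical.
from typing import Any, Optional

from langchain.evaluation.schema import StringEvaluator


class ExactMatchEvaluator(StringEvaluator):
    """Hypothetical evaluator: scores 1 if the prediction equals the reference."""

    @property
    def requires_reference(self) -> bool:
        return True

    def _evaluate_strings(
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        # Normalize whitespace and case before comparing.
        match = prediction.strip().lower() == (reference or "").strip().lower()
        return {"score": int(match)}


evaluator = ExactMatchEvaluator()
evaluator.evaluate_strings(prediction="Paris", reference="paris")  # {'score': 1}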
Classes¶
evaluation.agents.trajectory_eval_chain.TrajectoryEval(...)
Create new instance of TrajectoryEval(score, reasoning)
evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain
A chain for evaluating ReAct style agents.
evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser
Create a new model by parsing and validating input data from keyword arguments.
evaluation.comparison.eval_chain.PairwiseStringEvalChain
A chain for comparing two outputs, such as the outputs of two models.
evaluation.comparison.eval_chain.PairwiseStringResultOutputParser
A parser for the output of the PairwiseStringEvalChain.
evaluation.criteria.eval_chain.CriteriaEvalChain
LLM Chain for evaluating runs against criteria.
evaluation.criteria.eval_chain.CriteriaResultOutputParser
A parser for the output of the CriteriaEvalChain.
evaluation.embedding_distance.base.EmbeddingDistance(value)
Embedding Distance Metric.
evaluation.embedding_distance.base.EmbeddingDistanceEvalChain
Use embedding distances to score semantic difference between a prediction and reference.
evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain
Use embedding distances to score semantic difference between two predictions.
evaluation.qa.eval_chain.ContextQAEvalChain
LLM Chain specifically for evaluating QA without ground truth, based on context.
evaluation.qa.eval_chain.CotQAEvalChain
LLM Chain specifically for evaluating QA using chain of thought reasoning.
evaluation.qa.eval_chain.QAEvalChain
LLM Chain specifically for evaluating question answering.
evaluation.qa.generate_chain.QAGenerateChain
LLM Chain specifically for generating examples for question answering.
evaluation.run_evaluators.base.RunEvaluatorChain
Evaluate Run and optional examples.
evaluation.run_evaluators.base.RunEvaluatorOutputParser
Parse the output of a run.
evaluation.run_evaluators.implementations.ChoicesOutputParser
Parse a feedback run with optional choices.
evaluation.run_evaluators.implementations.CriteriaOutputParser
Parse a criteria results into an evaluation result.
evaluation.run_evaluators.implementations.StringRunEvaluatorInputMapper
Maps the Run and Optional[Example] to a dictionary.
evaluation.run_evaluators.implementations.TrajectoryInputMapper
Maps the Run and Optional[Example] to a dictionary.
evaluation.run_evaluators.implementations.TrajectoryRunEvalOutputParser
Create a new model by parsing and validating input data from keyword arguments.
evaluation.run_evaluators.string_run_evaluator.ChainStringRunMapper
Extract items to evaluate from the run object from a chain.
evaluation.run_evaluators.string_run_evaluator.LLMStringRunMapper
Extract items to evaluate from the run object.
evaluation.run_evaluators.string_run_evaluator.StringExampleMapper
Map an example, or row in the dataset, to the inputs of an evaluation.
evaluation.run_evaluators.string_run_evaluator.StringRunEvaluatorChain
Evaluate Run and optional examples.
evaluation.run_evaluators.string_run_evaluator.StringRunMapper
Extract items to evaluate from the run object.
evaluation.run_evaluators.string_run_evaluator.ToolStringRunMapper
Map an input to the tool.
evaluation.schema.AgentTrajectoryEvaluator()
Interface for evaluating agent trajectories.
evaluation.schema.EvaluatorType(value[, ...])
The types of the evaluators.
evaluation.schema.LLMEvalChain
A base class for evaluators that use an LLM.
evaluation.schema.PairwiseStringEvaluator()
Compare the output of two models (or two outputs of the same model).
evaluation.schema.StringEvaluator()
Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.
evaluation.string_distance.base.PairwiseStringDistanceEvalChain
Compute string edit distances between two predictions.
evaluation.string_distance.base.StringDistance(value)
Distance metric to use.
evaluation.string_distance.base.StringDistanceEvalChain
Compute string distances between the prediction and the reference.
Functions¶
evaluation.loading.load_dataset(uri)
Load a dataset from the LangChainDatasets HuggingFace org.
evaluation.loading.load_evaluator(evaluator, *)
Load the requested evaluation chain specified by a string.
evaluation.loading.load_evaluators(evaluators, *)
Load evaluators specified by a list of evaluator types.
evaluation.run_evaluators.implementations.get_criteria_evaluator(...)
Get an eval chain for grading a model's response against a map of criteria.
evaluation.run_evaluators.implementations.get_qa_evaluator(llm, *)
Get an eval chain that compares response against ground truth.
evaluation.run_evaluators.implementations.get_trajectory_evaluator(...)
Get an eval chain for grading a model's response against a map of criteria.
evaluation.run_evaluators.loading.load_run_evaluator_for_model(...)
Load evaluators specified by a list of evaluator types.
evaluation.run_evaluators.loading.load_run_evaluators_for_model(...)
Load evaluators specified by a list of evaluator types.
langchain.example_generator: Example Generator¶
Utility functions for working with prompts.
Functions¶
example_generator.generate_example(examples, ...)
Return another example given a list of examples for a prompt.
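A usage sketch for generate_example: the antonym template and example dicts below are hypothetical, and any BaseLanguageModel can stand in for the OpenAI LLM.
from langchain.example_generator import generate_example
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Each example dict must supply the template's input variables.
prompt = PromptTemplate.from_template("Word: {word}\nAntonym: {antonym}")
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]

# Asks the LLM to continue the few-shot pattern with a fresh example.
new_example = generate_example(examples, OpenAI(temperature=0.7), prompt)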
langchain.experimental: Experimental¶
Classes¶
experimental.autonomous_agents.autogpt.memory.AutoGPTMemory
Create a new model by parsing and validating input data from keyword arguments.
experimental.autonomous_agents.autogpt.output_parser.AutoGPTAction(...)
Create new instance of AutoGPTAction(name, args)
experimental.autonomous_agents.autogpt.output_parser.AutoGPTOutputParser
Create a new model by parsing and validating input data from keyword arguments.
experimental.autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser
Create a new model by parsing and validating input data from keyword arguments.
experimental.autonomous_agents.autogpt.prompt.AutoGPTPrompt
Create a new model by parsing and validating input data from keyword arguments.
experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI
Controller model for the BabyAGI agent.
experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain
Chain that generates tasks.
experimental.autonomous_agents.baby_agi.task_execution.TaskExecutionChain
Chain to execute tasks.
experimental.autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain
Chain to prioritize tasks.
experimental.generative_agents.generative_agent.GenerativeAgent
A character with memory and innate characteristics.
experimental.generative_agents.memory.GenerativeAgentMemory
Create a new model by parsing and validating input data from keyword arguments.
experimental.llms.jsonformer_decoder.JsonFormer
Create a new model by parsing and validating input data from keyword arguments.
experimental.llms.rellm_decoder.RELLM
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.agent_executor.PlanAndExecute
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.executors.base.BaseExecutor
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.executors.base.ChainExecutor
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.planners.base.BasePlanner
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.planners.base.LLMPlanner
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.BaseStepContainer
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.ListStepContainer
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.Plan
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.PlanOutputParser
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.Step
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.StepResponse
Create a new model by parsing and validating input data from keyword arguments.
Functions¶
experimental.autonomous_agents.autogpt.output_parser.preprocess_json_input(...)
Preprocesses a string to be parsed as json.
experimental.autonomous_agents.autogpt.prompt_generator.get_prompt(tools)
This function generates a prompt string.
experimental.llms.jsonformer_decoder.import_jsonformer()
Lazily import jsonformer.
experimental.llms.rellm_decoder.import_rellm()
Lazily import rellm.
experimental.plan_and_execute.executors.agent_executor.load_agent_executor(...)
Load an agent executor.
experimental.plan_and_execute.planners.chat_planner.load_chat_planner(llm)
Load a chat planner.
langchain.formatting: Formatting¶
Utilities for formatting strings.
Classes¶
formatting.StrictFormatter()
A subclass of formatter that checks for extra keys.
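A short sketch of the difference from the standard library Formatter, assuming StrictFormatter raises when a keyword argument goes unused rather than silently ignoring it.
from langchain.formatting import StrictFormatter

formatter = StrictFormatter()
formatter.format("Hello {name}", name="Ada")           # 'Hello Ada'
formatter.format("Hello {name}", name="Ada", age=36)   # raises KeyError for the extra key 'age'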
langchain.graphs: Graphs¶
Graph implementations.
Classes¶
graphs.networkx_graph.KnowledgeTriple(...)
A triple in the graph.
Functions¶
graphs.networkx_graph.get_entities(entity_str)
Extract entities from entity string.
graphs.networkx_graph.parse_triples(...)
Parse knowledge triples from the knowledge string.
langchain.indexes: Indexes¶
All index utils.
Classes¶
indexes.graph.GraphIndexCreator
Functionality to create graph index.
indexes.vectorstore.VectorStoreIndexWrapper
Wrapper around a vectorstore for easy access.
indexes.vectorstore.VectorstoreIndexCreator
Logic for creating indexes.
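A sketch of the typical indexing flow, assuming VectorstoreIndexCreator defaults to OpenAI embeddings and a Chroma vector store (so an OpenAI API key is required); the loader and file path are hypothetical.
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

loader = TextLoader("state_of_the_union.txt")  # hypothetical local file
index = VectorstoreIndexCreator().from_loaders([loader])
index.query("What did the speaker say about the economy?")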
langchain.input: Input¶
Handle chained inputs.
Functions¶
input.get_bolded_text(text)
Get bolded text.
input.get_color_mapping(items[, excluded_colors])
Get mapping for items to a support color.
input.get_colored_text(text, color)
Get colored text.
input.print_text(text[, color, end, file])
Print text with highlighting and no end characters.
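A small usage sketch, assuming get_colored_text wraps the string in ANSI escape codes and get_color_mapping cycles through the supported colors.
from langchain.input import get_color_mapping, get_colored_text, print_text

print(get_colored_text("warning", "yellow"))      # ANSI-colored string
print_text("done", color="green", end="\n")       # highlighted print

# Assign a distinct color per item, e.g. for per-chain log output.
get_color_mapping(["chain_a", "chain_b"])         # {'chain_a': ..., 'chain_b': ...}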
langchain.llms: LLMs¶
Wrappers on top of large language models APIs.
Classes¶
llms.ai21.AI21
Wrapper around AI21 large language models.
llms.ai21.AI21PenaltyData
Parameters for AI21 penalty data.
llms.aleph_alpha.AlephAlpha
Wrapper around Aleph Alpha large language models.
llms.amazon_api_gateway.AmazonAPIGateway
Wrapper around a custom Amazon API Gateway.
llms.anthropic.Anthropic
Wrapper around Anthropic's large language models.
llms.anyscale.Anyscale
Wrapper around Anyscale Services.
llms.aviary.Aviary
Allows you to use an Aviary.
llms.azureml_endpoint.AzureMLEndpointClient(...)
Wrapper around AzureML Managed Online Endpoint Client.
llms.azureml_endpoint.AzureMLOnlineEndpoint
Wrapper around Azure ML Hosted models using Managed Online Endpoints.
llms.azureml_endpoint.DollyContentFormatter()
Content handler for the Dolly-v2-12b model.
llms.azureml_endpoint.HFContentFormatter()
Content handler for LLMs from the HuggingFace catalog.
llms.azureml_endpoint.OSSContentFormatter()
Content handler for LLMs from the OSS catalog.
llms.bananadev.Banana
Wrapper around Banana large language models.
llms.base.BaseLLM
LLM wrapper that takes in a prompt and returns a string.
llms.base.LLM
LLM class that expect subclasses to implement a simpler call method.
llms.baseten.Baseten
Use your Baseten models in LangChain.
llms.beam.Beam
Wrapper around Beam API for gpt2 large language model.
llms.bedrock.Bedrock
LLM provider to invoke Bedrock models.
llms.cerebriumai.CerebriumAI
Wrapper around CerebriumAI large language models.
llms.clarifai.Clarifai
Wrapper around Clarifai's large language models.
llms.cohere.Cohere
Wrapper around Cohere large language models.
llms.ctransformers.CTransformers
Wrapper around the C Transformers LLM interface.
llms.databricks.Databricks
LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.
llms.deepinfra.DeepInfra
Wrapper around DeepInfra deployed models.
llms.fake.FakeListLLM
Fake LLM wrapper for testing purposes.
llms.forefrontai.ForefrontAI
Wrapper around ForefrontAI large language models.
llms.google_palm.GooglePalm
Create a new model by parsing and validating input data from keyword arguments.
llms.gooseai.GooseAI
Wrapper around OpenAI large language models.
llms.gpt4all.GPT4All
Wrapper around GPT4All language models.
llms.huggingface_endpoint.HuggingFaceEndpoint
Wrapper around HuggingFaceHub Inference Endpoints.
llms.huggingface_hub.HuggingFaceHub
Wrapper around HuggingFaceHub models.
llms.huggingface_pipeline.HuggingFacePipeline
Wrapper around HuggingFace Pipeline API.
llms.huggingface_text_gen_inference.HuggingFaceTextGenInference
HuggingFace text generation inference API.
llms.human.HumanInputLLM
An LLM wrapper that returns user input as the response.
llms.llamacpp.LlamaCpp
Wrapper around the llama.cpp model.
llms.manifest.ManifestWrapper
Wrapper around HazyResearch's Manifest library.
llms.modal.Modal
Wrapper around Modal large language models.
llms.mosaicml.MosaicML
Wrapper around MosaicML's LLM inference service.
llms.nlpcloud.NLPCloud
Wrapper around NLPCloud large language models.
llms.octoai_endpoint.OctoAIEndpoint
Wrapper around OctoAI Inference Endpoints.
llms.openai.AzureOpenAI
Wrapper around Azure-specific OpenAI large language models. | https://api.python.langchain.com/en/latest/api_reference.html |
38cccc443c5d-40 | llms.openai.AzureOpenAI
Wrapper around Azure-specific OpenAI large language models.
llms.openai.BaseOpenAI
Wrapper around OpenAI large language models.
llms.openai.OpenAI
Wrapper around OpenAI large language models.
llms.openai.OpenAIChat
Wrapper around OpenAI Chat large language models.
llms.openllm.IdentifyingParams
Parameters for identifying a model as a typed dict.
llms.openllm.OpenLLM
Wrapper for accessing OpenLLM, supporting both in-process model instances and remote OpenLLM servers.
llms.openlm.OpenLM
Create a new model by parsing and validating input data from keyword arguments.
llms.petals.Petals
Wrapper around Petals Bloom models.
llms.pipelineai.PipelineAI
Wrapper around PipelineAI large language models.
llms.predictionguard.PredictionGuard
Wrapper around Prediction Guard large language models.
llms.promptlayer_openai.PromptLayerOpenAI
Wrapper around OpenAI large language models.
llms.promptlayer_openai.PromptLayerOpenAIChat
Wrapper around OpenAI large language models.
llms.replicate.Replicate
Wrapper around Replicate models.
llms.rwkv.RWKV
Wrapper around RWKV language models.
llms.sagemaker_endpoint.ContentHandlerBase()
A handler class to transform input from LLM to a format that SageMaker endpoint expects.
llms.sagemaker_endpoint.LLMContentHandler()
Content handler for LLM class.
llms.sagemaker_endpoint.SagemakerEndpoint
Wrapper around custom Sagemaker Inference Endpoints.
llms.self_hosted.SelfHostedPipeline
Run model inference on self-hosted remote hardware.
llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM
Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.
llms.stochasticai.StochasticAI
Wrapper around StochasticAI large language models.
llms.textgen.TextGen
Wrapper around the text-generation-webui model.
llms.vertexai.VertexAI
Wrapper around Google Vertex AI large language models.
llms.writer.Writer
Wrapper around Writer large language models.
Functions¶
llms.aviary.get_completions(model, prompt[, ...])
Get completions from Aviary models.
llms.aviary.get_models()
List available Aviary models.
llms.base.create_base_retry_decorator(...[, ...])
Create a retry decorator for a given LLM and provided list of error types.
llms.base.get_prompts(params, prompts)
Get prompts that are already cached.
llms.base.update_cache(existing_prompts, ...)
Update the cache and get the LLM output.
llms.cohere.completion_with_retry(llm, **kwargs)
Use tenacity to retry the completion call.
llms.databricks.get_default_api_token()
Gets the default Databricks personal access token.
llms.databricks.get_default_host()
Gets the default Databricks workspace hostname.
llms.databricks.get_repl_context()
Gets the notebook REPL context if running inside a Databricks notebook.
llms.google_palm.generate_with_retry(llm, ...)
Use tenacity to retry the completion call.
llms.loading.load_llm(file)
Load LLM from file.
llms.loading.load_llm_from_config(config)
Load LLM from Config Dict.
llms.openai.completion_with_retry(llm, **kwargs)
Use tenacity to retry the completion call.
llms.openai.update_token_usage(keys, ...)
Update token usage.
llms.utils.enforce_stop_tokens(text, stop)
Cut off the text as soon as any stop words occur.
llms.vertexai.completion_with_retry(llm, ...)
Use tenacity to retry the completion call.
llms.vertexai.is_codey_model(model_name)
Returns True if the model name is a Codey model.
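To illustrate enforce_stop_tokens from the list above, a short sketch assuming it cuts the text at the first occurrence of any stop sequence.
from langchain.llms.utils import enforce_stop_tokens

text = "Answer: 42\nObservation: tool output ..."
enforce_stop_tokens(text, stop=["\nObservation:"])  # 'Answer: 42'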
langchain.load: Load¶
Classes¶
load.serializable.BaseSerialized
Base class for serialized objects.
load.serializable.Serializable
Serializable base class.
load.serializable.SerializedConstructor
Serialized constructor.
load.serializable.SerializedNotImplemented
Serialized not implemented.
load.serializable.SerializedSecret
Serialized secret.
Functions¶
load.dump.default(obj)
Return a default value for a Serializable object or a SerializedNotImplemented object.
load.dump.dumpd(obj)
Return a json dict representation of an object.
load.dump.dumps(obj, *[, pretty])
Return a json string representation of an object.
load.load.loads(text, *[, secrets_map])
Load a JSON object from a string.
load.serializable.to_json_not_implemented(obj)
Serialize a "not implemented" object.
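A round-trip sketch for the serialization helpers, assuming PromptTemplate is registered as Serializable and that loads accepts a secrets_map for restoring API keys when needed.
from langchain.load.dump import dumps
from langchain.load.load import loads
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Summarize: {text}")
serialized = dumps(prompt, pretty=True)  # JSON string
restored = loads(serialized)             # equivalent PromptTemplate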
langchain.math_utils: Math Utils¶
Math utils.
Functions¶
math_utils.cosine_similarity(X, Y)
Row-wise cosine similarity between two equal-width matrices.
math_utils.cosine_similarity_top_k(X, Y[, ...])
Row-wise cosine similarity with optional top-k and score threshold filtering.
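A worked sketch, assuming cosine_similarity returns a matrix of shape (len(X), len(Y)) and cosine_similarity_top_k returns (indices, scores) for the best (row, column) pairs.
import numpy as np

from langchain.math_utils import cosine_similarity, cosine_similarity_top_k

X = np.array([[1.0, 0.0], [0.0, 1.0]])
Y = np.array([[1.0, 1.0]])

cosine_similarity(X, Y)                 # array([[0.7071...], [0.7071...]])
cosine_similarity_top_k(X, Y, top_k=1)  # e.g. ([(0, 0)], [0.7071...])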
langchain.memory: Memory¶
Classes¶
memory.buffer.ConversationBufferMemory
Buffer for storing conversation memory.
memory.buffer.ConversationStringBufferMemory
Buffer for storing conversation memory.
memory.buffer_window.ConversationBufferWindowMemory
Buffer for storing conversation memory.
memory.chat_memory.BaseChatMemory
Create a new model by parsing and validating input data from keyword arguments.
memory.chat_message_histories.cassandra.CassandraChatMessageHistory(...)
Chat message history that stores history in Cassandra.
memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory(...)
Chat history backed by Azure CosmosDB.
memory.chat_message_histories.dynamodb.DynamoDBChatMessageHistory(...)
Chat message history that stores history in AWS DynamoDB.
memory.chat_message_histories.file.FileChatMessageHistory(...)
Chat message history that stores history in a local file.
memory.chat_message_histories.firestore.FirestoreChatMessageHistory(...)
Chat history backed by Google Firestore.
memory.chat_message_histories.in_memory.ChatMessageHistory
In memory implementation of chat message history.
memory.chat_message_histories.momento.MomentoChatMessageHistory(...)
Chat message history cache that uses Momento as a backend.
memory.chat_message_histories.mongodb.MongoDBChatMessageHistory(...)
Chat message history that stores history in MongoDB.
memory.chat_message_histories.postgres.PostgresChatMessageHistory(...)
Chat message history stored in a Postgres database.
memory.chat_message_histories.redis.RedisChatMessageHistory(...)
Chat message history stored in a Redis database.
memory.chat_message_histories.sql.SQLChatMessageHistory(...)
Chat message history stored in an SQL database.
memory.chat_message_histories.zep.ZepChatMessageHistory(...)
A ChatMessageHistory implementation that uses Zep as a backend.
memory.combined.CombinedMemory
Class for combining multiple memories' data together.
memory.entity.BaseEntityStore
Create a new model by parsing and validating input data from keyword arguments.
memory.entity.ConversationEntityMemory
Entity extractor & summarizer memory.
memory.entity.InMemoryEntityStore
Basic in-memory entity store.
memory.entity.RedisEntityStore
Redis-backed Entity store.
memory.entity.SQLiteEntityStore
SQLite-backed Entity store.
memory.kg.ConversationKGMemory
Knowledge graph memory for storing conversation memory.
memory.motorhead_memory.MotorheadMemory
Create a new model by parsing and validating input data from keyword arguments.
memory.readonly.ReadOnlySharedMemory
A memory wrapper that is read-only and cannot be changed.
memory.simple.SimpleMemory
Simple memory for storing context or other bits of information that shouldn't ever change between prompts.
memory.summary.ConversationSummaryMemory
Conversation summarizer to memory.
memory.summary.SummarizerMixin
Create a new model by parsing and validating input data from keyword arguments.
memory.summary_buffer.ConversationSummaryBufferMemory
Buffer with summarizer for storing conversation memory.
memory.token_buffer.ConversationTokenBufferMemory
Buffer for storing conversation memory.
memory.vectorstore.VectorStoreRetrieverMemory
Class for a VectorStore-backed memory object.
Functions¶
memory.chat_message_histories.sql.create_message_model(...)
Create a message model for a given table name.
memory.utils.get_prompt_input_key(inputs, ...)
Get the prompt input key.
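A minimal round-trip sketch using ConversationBufferMemory from the list above.
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "Hi there"}, {"output": "Hello! How can I help?"})
memory.load_memory_variables({})
# {'history': 'Human: Hi there\nAI: Hello! How can I help?'}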
langchain.output_parsers: Output Parsers¶
Classes¶
output_parsers.boolean.BooleanOutputParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.combining.CombiningOutputParser
Class to combine multiple output parsers into one.
output_parsers.datetime.DatetimeOutputParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.enum.EnumOutputParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.fix.OutputFixingParser
Wraps a parser and tries to fix parsing errors.
output_parsers.list.CommaSeparatedListOutputParser
Parse out comma separated lists.
output_parsers.list.ListOutputParser
Class to parse the output of an LLM call to a list.
output_parsers.openai_functions.JsonKeyOutputFunctionsParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.openai_functions.JsonOutputFunctionsParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.openai_functions.OutputFunctionsParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.openai_functions.PydanticAttrOutputFunctionsParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.openai_functions.PydanticOutputFunctionsParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.pydantic.PydanticOutputParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.rail_parser.GuardrailsOutputParser
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.regex.RegexParser
Class to parse the output into a dictionary.
output_parsers.regex_dict.RegexDictParser
Class to parse the output into a dictionary.
output_parsers.retry.RetryOutputParser
Wraps a parser and tries to fix parsing errors.
output_parsers.retry.RetryWithErrorOutputParser
Wraps a parser and tries to fix parsing errors.
output_parsers.structured.ResponseSchema
Create a new model by parsing and validating input data from keyword arguments.
output_parsers.structured.StructuredOutputParser
Create a new model by parsing and validating input data from keyword arguments.
Functions¶
output_parsers.json.parse_and_check_json_markdown(...)
Parse a JSON string from a Markdown string and check that it contains the expected keys.
output_parsers.json.parse_json_markdown(...)
Parse a JSON string from a Markdown string.
output_parsers.loading.load_output_parser(config)
Load output parser.
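To illustrate the JSON helpers above: a sketch assuming parse_json_markdown extracts the first fenced json block and parse_and_check_json_markdown additionally validates the expected keys.
from langchain.output_parsers.json import (
    parse_and_check_json_markdown,
    parse_json_markdown,
)

text = '```json\n{"action": "search", "input": "weather"}\n```'
parse_json_markdown(text)  # {'action': 'search', 'input': 'weather'}

# Raises OutputParserException if a listed key is missing.
parse_and_check_json_markdown(text, expected_keys=["action", "input"])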
langchain.prompts: Prompts¶
Prompt template classes.
Classes¶
prompts.base.StringPromptTemplate
String prompt should expose the format method, returning a prompt.
prompts.base.StringPromptValue
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.AIMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.BaseChatPromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.BaseMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.BaseStringMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.ChatMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.ChatPromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.ChatPromptValue
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.HumanMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.chat.MessagesPlaceholder
Prompt template that assumes variable is already list of messages.
prompts.chat.SystemMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
prompts.example_selector.base.BaseExampleSelector()
Interface for selecting examples to include in prompts.
prompts.example_selector.length_based.LengthBasedExampleSelector
Select examples based on length.
prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector
Select and order examples based on ngram overlap score (sentence_bleu score).
prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector
ExampleSelector that selects examples based on Max Marginal Relevance.
prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector
Example selector that selects examples based on SemanticSimilarity.
prompts.few_shot.FewShotPromptTemplate
Prompt template that contains few shot examples.
prompts.few_shot_with_templates.FewShotPromptWithTemplates
Prompt template that contains few shot examples.
prompts.pipeline.PipelinePromptTemplate
A prompt template for composing multiple prompts together.
prompts.prompt.PromptTemplate
Schema to represent a prompt for an LLM.
Functions¶
prompts.base.check_valid_template(template, ...)
Check that template string is valid.
prompts.base.jinja2_formatter(template, **kwargs)
Format a template using jinja2.
prompts.base.validate_jinja2(template, ...)
Validate that the input variables are valid for the template.
prompts.example_selector.ngram_overlap.ngram_overlap_score(...)
Compute ngram overlap score of source and example as sentence_bleu score.
prompts.example_selector.semantic_similarity.sorted_values(values)
Return a list of values in dict sorted by key.
prompts.loading.load_prompt(path)
Unified method for loading a prompt from LangChainHub or local fs.
prompts.loading.load_prompt_from_config(config)
Load prompt from Config Dict.
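A brief usage sketch for PromptTemplate from the list above.
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {topic}.")
prompt.format(adjective="short", topic="embeddings")
# 'Tell me a short joke about embeddings.'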
langchain.requests: Requests¶
Lightweight wrapper around requests library, with async support.
Classes¶
requests.Requests
Wrapper around requests to handle auth and async.
requests.TextRequestsWrapper
Lightweight wrapper around requests library.
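A minimal sketch, assuming TextRequestsWrapper exposes one method per HTTP verb that returns the response body as text (with async variants such as aget).
from langchain.requests import TextRequestsWrapper

requests_wrapper = TextRequestsWrapper(headers={"User-Agent": "langchain-example"})
html = requests_wrapper.get("https://example.com")  # response body as str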
langchain.retrievers: Retrievers¶
Classes¶
retrievers.arxiv.ArxivRetriever
It is effectively a wrapper for ArxivAPIWrapper.
retrievers.azure_cognitive_search.AzureCognitiveSearchRetriever
Wrapper around Azure Cognitive Search.
retrievers.chaindesk.ChaindeskRetriever
Retriever that uses the Chaindesk API.
retrievers.chatgpt_plugin_retriever.ChatGPTPluginRetriever
Create a new model by parsing and validating input data from keyword arguments.
retrievers.contextual_compression.ContextualCompressionRetriever
Retriever that wraps a base retriever and compresses the results.
retrievers.databerry.DataberryRetriever
Retriever that uses the Databerry API.
retrievers.docarray.DocArrayRetriever
Retriever class for DocArray Document Indices.
retrievers.docarray.SearchType(value[, ...])
Enumerator of the types of search to perform.
retrievers.document_compressors.base.BaseDocumentCompressor
Base abstraction interface for document compression.
retrievers.document_compressors.base.DocumentCompressorPipeline
Document compressor that uses a pipeline of transformers.
retrievers.document_compressors.chain_extract.LLMChainExtractor
Create a new model by parsing and validating input data from keyword arguments.
retrievers.document_compressors.chain_extract.NoOutputParser
Parse outputs that could return a null string of some sort.
retrievers.document_compressors.chain_filter.LLMChainFilter
Filter that drops documents that aren't relevant to the query.
retrievers.document_compressors.cohere_rerank.CohereRerank
Create a new model by parsing and validating input data from keyword arguments.
retrievers.document_compressors.embeddings_filter.EmbeddingsFilter
Create a new model by parsing and validating input data from keyword arguments.
retrievers.elastic_search_bm25.ElasticSearchBM25Retriever
Wrapper around Elasticsearch using BM25 as a retrieval method.
retrievers.kendra.AdditionalResultAttribute
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.AdditionalResultAttributeValue
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.AmazonKendraRetriever
Retriever class to query documents from Amazon Kendra Index.
retrievers.kendra.DocumentAttribute
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.DocumentAttributeValue
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.Highlight
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.QueryResult
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.QueryResultItem
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.RetrieveResult
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.RetrieveResultItem
Create a new model by parsing and validating input data from keyword arguments.
retrievers.kendra.TextWithHighLights
Create a new model by parsing and validating input data from keyword arguments.
retrievers.knn.KNNRetriever
KNN Retriever.
retrievers.llama_index.LlamaIndexGraphRetriever
Question-answering with sources over an LlamaIndex graph data structure.
retrievers.llama_index.LlamaIndexRetriever
Question-answering with sources over an LlamaIndex data structure.
retrievers.merger_retriever.MergerRetriever
This class merges the results of multiple retrievers.
retrievers.metal.MetalRetriever
Retriever that uses the Metal API.
retrievers.milvus.MilvusRetriever
Retriever that uses the Milvus API.
retrievers.multi_query.LineList
Create a new model by parsing and validating input data from keyword arguments.
retrievers.multi_query.LineListOutputParser
Create a new model by parsing and validating input data from keyword arguments.
retrievers.multi_query.MultiQueryRetriever
Given a user query, use an LLM to write a set of queries.
retrievers.pinecone_hybrid_search.PineconeHybridSearchRetriever
Create a new model by parsing and validating input data from keyword arguments.
retrievers.pubmed.PubMedRetriever
It is effectively a wrapper for PubMedAPIWrapper.
retrievers.remote_retriever.RemoteLangChainRetriever
Create a new model by parsing and validating input data from keyword arguments.
retrievers.self_query.base.SelfQueryRetriever
Retriever that wraps around a vector store and uses an LLM to generate the vector store queries.
retrievers.self_query.chroma.ChromaTranslator()
Logic for converting internal query language elements to valid filters.
retrievers.self_query.myscale.MyScaleTranslator([...])
Logic for converting internal query language elements to valid filters.
retrievers.self_query.pinecone.PineconeTranslator()
Logic for converting internal query language elements to valid filters.
retrievers.self_query.qdrant.QdrantTranslator(...)
Logic for converting internal query language elements to valid filters.
retrievers.self_query.weaviate.WeaviateTranslator()
Logic for converting internal query language elements to valid filters.
retrievers.svm.SVMRetriever
SVM Retriever.
retrievers.tfidf.TFIDFRetriever
Create a new model by parsing and validating input data from keyword arguments.
retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever
Retriever combining embedding similarity with recency.
retrievers.vespa_retriever.VespaRetriever
Retriever that uses Vespa.
retrievers.weaviate_hybrid_search.WeaviateHybridSearchRetriever
Retriever that uses Weaviate's hybrid search to retrieve documents.
retrievers.wikipedia.WikipediaRetriever
It is effectively a wrapper for WikipediaAPIWrapper.
retrievers.zep.ZepRetriever
A Retriever implementation for the Zep long-term memory store.
retrievers.zilliz.ZillizRetriever
Retriever that uses the Zilliz API.
Functions¶
retrievers.document_compressors.chain_extract.default_get_input(...)
Return the compression chain input.
retrievers.document_compressors.chain_filter.default_get_input(...)
Return the compression chain input.
retrievers.kendra.clean_excerpt(excerpt)
Cleans an excerpt from Kendra.
retrievers.kendra.combined_text(title, excerpt)
Combines a title and an excerpt into a single string.
retrievers.knn.create_index(contexts, embeddings)
Create an index of embeddings for a list of contexts.
retrievers.milvus.MilvusRetreiver(*args, ...)
Deprecated MilvusRetreiver.
retrievers.pinecone_hybrid_search.create_index(...)
Create a Pinecone index from a list of contexts.
retrievers.pinecone_hybrid_search.hash_text(text)
Hash a text using SHA256.
retrievers.self_query.myscale.DEFAULT_COMPOSER(op_name)
Default composer for logical operators.
retrievers.self_query.myscale.FUNCTION_COMPOSER(op_name)
Composer for functions.
retrievers.svm.create_index(contexts, embeddings)
Create an index of embeddings for a list of contexts.
retrievers.zilliz.ZillizRetreiver(*args, ...)
Deprecated ZillizRetreiver.
langchain.schema: Schema¶
Classes¶
schema.agent.AgentFinish(return_values, log)
The final return value of an ActionAgent.
schema.document.BaseDocumentTransformer()
Abstract base class for document transformation systems.
schema.document.Document
Class for storing a piece of text and associated metadata.
schema.language_model.BaseLanguageModel
Abstract base class for interfacing with language models.
schema.memory.BaseChatMessageHistory()
Abstract base class for storing chat message history.
schema.memory.BaseMemory
Base abstract class for memory in Chains.
schema.messages.AIMessage
A Message from an AI.
schema.messages.BaseMessage
The base abstract Message class.
schema.messages.ChatMessage
A Message that can be assigned an arbitrary speaker (i.e. role).
schema.messages.FunctionMessage
A Message for passing the result of executing a function back to a model.
schema.messages.HumanMessage
A Message from a human.
schema.messages.SystemMessage
A Message for priming AI behavior, usually passed in as the first of a sequence of input messages.
schema.output.ChatGeneration
A single chat generation output.
schema.output.ChatResult
Class that contains all results for a single chat model call.
schema.output.Generation
A single text generation output.
schema.output.LLMResult
Class that contains all results for a batched LLM call.
schema.output.RunInfo
Class that contains metadata for a single execution of a Chain or model.
schema.output_parser.BaseLLMOutputParser
Abstract base class for parsing the outputs of a model.
schema.output_parser.BaseOutputParser
Class to parse the output of an LLM call.
schema.output_parser.NoOpOutputParser
'No operation' OutputParser that returns the text as is.
schema.output_parser.OutputParserException(error)
Exception that output parsers should raise to signify a parsing error.
schema.prompt.PromptValue
Base abstract class for inputs to any language model.
schema.prompt_template.BasePromptTemplate
Base class for all prompt templates, returning a prompt.
schema.retriever.BaseRetriever
Abstract base class for a Document retrieval system.
Functions¶
schema.messages.get_buffer_string(messages)
Convert sequence of Messages to strings and concatenate them into one string.
schema.messages.messages_from_dict(messages)
Convert a sequence of messages from dicts to Message objects.
schema.messages.messages_to_dict(messages)
Convert a sequence of Messages to a list of dictionaries.
schema.prompt_template.format_document(doc, ...)
Format a document into a string based on a prompt template.
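A short sketch of the message helpers above.
from langchain.schema.messages import (
    AIMessage,
    HumanMessage,
    get_buffer_string,
    messages_from_dict,
    messages_to_dict,
)

messages = [HumanMessage(content="Hi"), AIMessage(content="Hello!")]
get_buffer_string(messages)  # 'Human: Hi\nAI: Hello!'

# Dict round-trip, e.g. for persisting chat history.
restored = messages_from_dict(messages_to_dict(messages))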
langchain.server: Server¶
Script to run langchain-server locally using docker-compose.
Functions¶
server.main()
Run the langchain server locally.
langchain.sql_database: Sql Database¶
SQLAlchemy wrapper around a database.
Functions¶
sql_database.truncate_word(content, *, length)
Truncate a string to a certain number of words, based on the max string length.
langchain.text_splitter: Text Splitter¶
Functionality for splitting text.
Classes¶
text_splitter.CharacterTextSplitter([separator])
Implementation of splitting text that looks at characters.
text_splitter.HeaderType
Header type as typed dict.
text_splitter.Language(value[, names, ...])
Enum of the programming languages.
text_splitter.LatexTextSplitter(**kwargs)
Attempts to split the text along Latex-formatted layout elements.
text_splitter.LineType
Line type as typed dict.
text_splitter.MarkdownTextSplitter(**kwargs)
Attempts to split the text along Markdown-formatted headings.
text_splitter.NLTKTextSplitter([separator])
Implementation of splitting text that looks at sentences using NLTK.
text_splitter.PythonCodeTextSplitter(**kwargs)
Attempts to split the text along Python syntax.
text_splitter.RecursiveCharacterTextSplitter([...])
Implementation of splitting text that looks at characters.
text_splitter.SentenceTransformersTokenTextSplitter([...])
Implementation of splitting text that looks at tokens.
text_splitter.SpacyTextSplitter([separator, ...])
Implementation of splitting text that looks at sentences using Spacy.
text_splitter.TextSplitter(chunk_size, ...)
Interface for splitting text into chunks.
text_splitter.TokenTextSplitter([...])
Implementation of splitting text that looks at tokens.
Functions¶
text_splitter.split_text_on_tokens(*, text, ...)
Split incoming text and return chunks.
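A usage sketch for the recursive splitter from the list above.
from langchain.text_splitter import RecursiveCharacterTextSplitter

text = "LangChain provides utilities for splitting long documents. " * 20
splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
chunks = splitter.split_text(text)  # list of ~100-character strings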
langchain.tools: Tools¶
Core toolkit implementations.
Classes¶
tools.arxiv.tool.ArxivQueryRun
Tool that adds the capability to search using the Arxiv API.
tools.azure_cognitive_services.form_recognizer.AzureCogsFormRecognizerTool
Tool that queries the Azure Cognitive Services Form Recognizer API.
tools.azure_cognitive_services.image_analysis.AzureCogsImageAnalysisTool
Tool that queries the Azure Cognitive Services Image Analysis API.
tools.azure_cognitive_services.speech2text.AzureCogsSpeech2TextTool
Tool that queries the Azure Cognitive Services Speech2Text API.
tools.azure_cognitive_services.text2speech.AzureCogsText2SpeechTool
Tool that queries the Azure Cognitive Services Text2Speech API.
tools.base.BaseTool
Interface LangChain tools must implement.
tools.base.SchemaAnnotationError
Raised when 'args_schema' is missing or has an incorrect type annotation.
tools.base.StructuredTool
Tool that can operate on any number of inputs.
tools.base.Tool
Tool that takes in function or coroutine directly.
tools.base.ToolException
An optional exception that tool throws when execution error occurs.
tools.base.ToolMetaclass(name, bases, dct)
Metaclass for BaseTool to ensure the provided args_schema is respected rather than silently ignored.
tools.bing_search.tool.BingSearchResults
Tool that has the capability to query the Bing Search API and get back JSON.
tools.bing_search.tool.BingSearchRun
Tool that adds the capability to query the Bing search API.
tools.brave_search.tool.BraveSearch
Create a new model by parsing and validating input data from keyword arguments.
tools.convert_to_openai.FunctionDescription
Representation of a callable function to the OpenAI API.
tools.dataforseo_api_search.tool.DataForSeoAPISearchResults
Tool that has the capability to query the DataForSeo Google Search API and get back JSON.
tools.dataforseo_api_search.tool.DataForSeoAPISearchRun
Tool that adds the capability to query the DataForSeo Google search API.
tools.ddg_search.tool.DuckDuckGoSearchResults
Tool that queries the DuckDuckGo Search API and gets back JSON.
tools.ddg_search.tool.DuckDuckGoSearchRun
Tool that adds the capability to query the DuckDuckGo search API.
tools.file_management.copy.CopyFileTool
Create a new model by parsing and validating input data from keyword arguments.
tools.file_management.copy.FileCopyInput
Input for CopyFileTool.
tools.file_management.delete.DeleteFileTool
Create a new model by parsing and validating input data from keyword arguments.
tools.file_management.delete.FileDeleteInput
Input for DeleteFileTool.
tools.file_management.file_search.FileSearchInput
Input for FileSearchTool.
tools.file_management.file_search.FileSearchTool
Create a new model by parsing and validating input data from keyword arguments.
tools.file_management.list_dir.DirectoryListingInput
Input for ListDirectoryTool.
tools.file_management.list_dir.ListDirectoryTool
Create a new model by parsing and validating input data from keyword arguments.
tools.file_management.move.FileMoveInput
Input for MoveFileTool.
tools.file_management.move.MoveFileTool
Create a new model by parsing and validating input data from keyword arguments.
tools.file_management.read.ReadFileInput
Input for ReadFileTool.
tools.file_management.read.ReadFileTool
Create a new model by parsing and validating input data from keyword arguments.
tools.file_management.utils.BaseFileToolMixin
Mixin for file system tools.
tools.file_management.utils.FileValidationError
Error for paths outside the root directory.
tools.file_management.write.WriteFileInput
Input for WriteFileTool.
tools.file_management.write.WriteFileTool
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.base.GmailBaseTool
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.create_draft.CreateDraftSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.create_draft.GmailCreateDraft
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.get_message.GmailGetMessage
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.get_message.SearchArgsSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.get_thread.GetThreadSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.get_thread.GmailGetThread
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.search.GmailSearch
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.search.Resource(value[, names, ...])
Enumerator of Resources to search.
tools.gmail.search.SearchArgsSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.send_message.GmailSendMessage
Create a new model by parsing and validating input data from keyword arguments.
tools.gmail.send_message.SendMessageSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.google_places.tool.GooglePlacesSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.google_places.tool.GooglePlacesTool
Tool that adds the capability to query the Google places API.
tools.google_search.tool.GoogleSearchResults
Tool that has the capability to query the Google Search API and get back JSON.
tools.google_search.tool.GoogleSearchRun
Tool that adds the capability to query the Google search API.
tools.google_serper.tool.GoogleSerperResults
Tool that has the capability to query the Serper.dev Google Search API and get back JSON.
tools.google_serper.tool.GoogleSerperRun
Tool that adds the capability to query the Serper.dev Google search API.
tools.graphql.tool.BaseGraphQLTool
Base tool for querying a GraphQL API.
tools.human.tool.HumanInputRun
Tool that adds the capability to ask user for input.
tools.ifttt.IFTTTWebhook
IFTTT Webhook.
tools.jira.tool.JiraAction
Create a new model by parsing and validating input data from keyword arguments.
tools.json.tool.JsonGetValueTool
Tool for getting a value in a JSON spec.
tools.json.tool.JsonListKeysTool
Tool for listing keys in a JSON spec.
tools.json.tool.JsonSpec
Base class for JSON spec.
tools.metaphor_search.tool.MetaphorSearchResults
Tool that has the capability to query the Metaphor Search API and get back JSON.
tools.office365.base.O365BaseTool
Create a new model by parsing and validating input data from keyword arguments.
tools.office365.create_draft_message.CreateDraftMessageSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.office365.create_draft_message.O365CreateDraftMessage
Create a new model by parsing and validating input data from keyword arguments.
tools.office365.events_search.O365SearchEvents
Class for searching calendar events in Office 365.
tools.office365.events_search.SearchEventsInput
Input for SearchEmails Tool.
tools.office365.messages_search.O365SearchEmails
Class for searching email messages in Office 365.
tools.office365.messages_search.SearchEmailsInput
Input for SearchEmails Tool.
tools.office365.send_event.O365SendEvent
Create a new model by parsing and validating input data from keyword arguments.
tools.office365.send_event.SendEventSchema
Input for CreateEvent Tool.
tools.office365.send_message.O365SendMessage
Create a new model by parsing and validating input data from keyword arguments.
tools.office365.send_message.SendMessageSchema
Create a new model by parsing and validating input data from keyword arguments.
tools.openapi.utils.api_models.APIOperation
A model for a single API operation.
tools.openapi.utils.api_models.APIProperty
A model for a property in the query, path, header, or cookie params.
tools.openapi.utils.api_models.APIPropertyBase
Base model for an API property.
tools.openapi.utils.api_models.APIPropertyLocation(value)
The location of the property.
tools.openapi.utils.api_models.APIRequestBody
A model for a request body.
tools.openapi.utils.api_models.APIRequestBodyProperty
A model for a request body property.
tools.openweathermap.tool.OpenWeatherMapQueryRun
Tool that adds the capability to query using the OpenWeatherMap API.
tools.playwright.base.BaseBrowserTool
Base class for browser tools.
tools.playwright.click.ClickTool
Create a new model by parsing and validating input data from keyword arguments.
tools.playwright.click.ClickToolInput
Input for ClickTool.
tools.playwright.current_page.CurrentWebPageTool
Create a new model by parsing and validating input data from keyword arguments.
tools.playwright.extract_hyperlinks.ExtractHyperlinksTool
Extract all hyperlinks on the page.
tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput
Input for ExtractHyperlinksTool.
tools.playwright.extract_text.ExtractTextTool
Create a new model by parsing and validating input data from keyword arguments.
tools.playwright.get_elements.GetElementsTool
Create a new model by parsing and validating input data from keyword arguments.
tools.playwright.get_elements.GetElementsToolInput
Input for GetElementsTool.
tools.playwright.navigate.NavigateTool
Create a new model by parsing and validating input data from keyword arguments.
tools.playwright.navigate.NavigateToolInput
Input for NavigateToolInput.
tools.playwright.navigate_back.NavigateBackTool
Navigate back to the previous page in the browser history.
tools.plugin.AIPlugin
AI Plugin Definition.
tools.plugin.AIPluginTool
Create a new model by parsing and validating input data from keyword arguments.
tools.plugin.AIPluginToolSchema
Schema for AIPluginTool.
tools.plugin.ApiConfig
Create a new model by parsing and validating input data from keyword arguments.
tools.powerbi.tool.InfoPowerBITool
Tool for getting metadata about a PowerBI Dataset.
tools.powerbi.tool.ListPowerBITool
Tool for getting tables names.
tools.powerbi.tool.QueryPowerBITool
Tool for querying a Power BI Dataset.
tools.pubmed.tool.PubmedQueryRun
Tool that adds the capability to search using the PubMed API.
tools.python.tool.PythonAstREPLTool
A tool for running python code in a REPL.
tools.python.tool.PythonREPLTool
A tool for running python code in a REPL.
tools.requests.tool.BaseRequestsTool
Base class for requests tools.
tools.requests.tool.RequestsDeleteTool
Tool for making a DELETE request to an API endpoint.
tools.requests.tool.RequestsGetTool
Tool for making a GET request to an API endpoint.
tools.requests.tool.RequestsPatchTool
Tool for making a PATCH request to an API endpoint.
tools.requests.tool.RequestsPostTool
Tool for making a POST request to an API endpoint.
tools.requests.tool.RequestsPutTool
Tool for making a PUT request to an API endpoint.
tools.scenexplain.tool.SceneXplainInput
Input for SceneXplain.
tools.scenexplain.tool.SceneXplainTool
Tool that adds the capability to explain images.
tools.searx_search.tool.SearxSearchResults
Tool that has the capability to query a Searx instance and get back JSON.
tools.searx_search.tool.SearxSearchRun
Tool that adds the capability to query a Searx instance.
tools.shell.tool.ShellInput
Commands for the Bash Shell tool.
tools.shell.tool.ShellTool
Tool to run shell commands.
tools.sleep.tool.SleepInput
Input for SleepTool.
tools.sleep.tool.SleepTool
Tool that adds the capability to sleep.
tools.spark_sql.tool.BaseSparkSQLTool
Base tool for interacting with Spark SQL.
tools.spark_sql.tool.InfoSparkSQLTool
Tool for getting metadata about a Spark SQL.
tools.spark_sql.tool.ListSparkSQLTool
Tool for getting tables names.
tools.spark_sql.tool.QueryCheckerTool
Use an LLM to check if a query is correct.
tools.spark_sql.tool.QuerySparkSQLTool
Tool for querying a Spark SQL.
tools.sql_database.tool.BaseSQLDatabaseTool
Base tool for interacting with a SQL database.
tools.sql_database.tool.InfoSQLDatabaseTool
Tool for getting metadata about a SQL database.
tools.sql_database.tool.ListSQLDatabaseTool
Tool for getting tables names.
tools.sql_database.tool.QuerySQLCheckerTool
Use an LLM to check if a query is correct.
tools.sql_database.tool.QuerySQLDataBaseTool
Tool for querying a SQL database.
tools.steamship_image_generation.tool.ModelName(value)
Supported Image Models for generation.
tools.steamship_image_generation.tool.SteamshipImageGenerationTool
Tool used to generate images from a text-prompt.
tools.vectorstore.tool.BaseVectorStoreTool
Base class for tools that use a VectorStore.
tools.vectorstore.tool.VectorStoreQATool
Tool for the VectorDBQA chain.
tools.vectorstore.tool.VectorStoreQAWithSourcesTool
Tool for the VectorDBQAWithSources chain.
tools.wikipedia.tool.WikipediaQueryRun
Tool that adds the capability to search using the Wikipedia API.
tools.wolfram_alpha.tool.WolframAlphaQueryRun
Tool that adds the capability to query using the Wolfram Alpha SDK.
tools.youtube.search.YouTubeSearchTool
Create a new model by parsing and validating input data from keyword arguments.
tools.zapier.tool.ZapierNLAListActions
Returns a list of all exposed (enabled) actions associated with
tools.zapier.tool.ZapierNLARunAction
Executes an action that is identified by action_id, must be exposed
Functions¶
tools.azure_cognitive_services.utils.detect_file_src_type(...)
Detect if the file is local or remote.
tools.azure_cognitive_services.utils.download_audio_from_url(...)
Download audio from url to local.
tools.base.create_schema_from_function(...)
Create a pydantic schema from a function's signature.
tools.base.tool(*args[, return_direct, ...])
Make tools out of functions; can be used with or without arguments (see the sketch after this list).
tools.convert_to_openai.format_tool_to_openai_function(tool)
Format tool into the OpenAI function API.
tools.ddg_search.tool.DuckDuckGoSearchTool(...)
Deprecated.
tools.file_management.utils.get_validated_relative_path(...)
Resolve a relative path, raising an error if not within the root directory.
tools.file_management.utils.is_relative_to(...)
Check if path is relative to root.
tools.gmail.utils.build_resource_service([...])
Build a Gmail service.
tools.gmail.utils.clean_email_body(body)
Clean email body.
tools.gmail.utils.get_gmail_credentials([...])
Get credentials.
tools.gmail.utils.import_google()
Import google libraries.
tools.gmail.utils.import_googleapiclient_resource_builder()
Import googleapiclient.discovery.build function.
tools.gmail.utils.import_installed_app_flow()
Import InstalledAppFlow class.
tools.interaction.tool.StdInInquireTool(...)
Tool for asking the user for input.
tools.office365.utils.authenticate()
Authenticate using the Microsoft Graph API.
tools.office365.utils.clean_body(body)
Clean body of a message or event.
tools.playwright.base.lazy_import_playwright_browsers()
Lazy import playwright browsers.
tools.playwright.utils.create_async_playwright_browser([...])
Create an async playwright browser.
tools.playwright.utils.create_sync_playwright_browser([...])
Create a playwright browser.
tools.playwright.utils.get_current_page(browser)
Get the current page of the browser.
tools.playwright.utils.run_async(coro)
Run an async coroutine.
tools.plugin.marshal_spec(txt)
Convert the yaml or json serialized spec to a dict.
tools.python.tool.sanitize_input(query)
Sanitize input to the python REPL.
tools.steamship_image_generation.utils.make_image_public(...)
Upload a block to a signed URL and return the public URL.
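To make the tools.base.tool entry above concrete, here is a minimal sketch of the decorator in use (the word_count function is a hypothetical example, not part of the library):
from langchain.tools.base import tool

@tool
def word_count(text: str) -> str:
    """Count the number of words in the input text."""
    return str(len(text.split()))

# The decorator turns the function into a Tool: the tool's name comes from
# the function name and its description from the docstring.
print(word_count.run("hello world"))  # -> "2"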
langchain.utilities: Utilities¶
General utilities.
Classes¶
utilities.apify.ApifyWrapper
Wrapper around Apify.
utilities.arxiv.ArxivAPIWrapper
Wrapper around ArxivAPI.
utilities.awslambda.LambdaWrapper
Wrapper for AWS Lambda SDK.
utilities.bibtex.BibtexparserWrapper
Wrapper around bibtexparser.
utilities.bing_search.BingSearchAPIWrapper
Wrapper for Bing Search API.
utilities.brave_search.BraveSearchWrapper
Create a new model by parsing and validating input data from keyword arguments.
utilities.dataforseo_api_search.DataForSeoAPIWrapper
Create a new model by parsing and validating input data from keyword arguments.
utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper
Wrapper for DuckDuckGo Search API.
utilities.google_places_api.GooglePlacesAPIWrapper
Wrapper around Google Places API.
utilities.google_search.GoogleSearchAPIWrapper
Wrapper for Google Search API.
utilities.google_serper.GoogleSerperAPIWrapper
Wrapper around the Serper.dev Google Search API.
utilities.graphql.GraphQLAPIWrapper
Wrapper around GraphQL API.
utilities.jira.JiraAPIWrapper
Wrapper for Jira API.
utilities.metaphor_search.MetaphorSearchAPIWrapper
Wrapper for Metaphor Search API.
utilities.openapi.HTTPVerb(value[, names, ...])
Enumerator of the HTTP verbs.
utilities.openapi.OpenAPISpec
OpenAPI Model that removes misformatted parts of the spec.
utilities.openweathermap.OpenWeatherMapAPIWrapper
Wrapper for OpenWeatherMap API using PyOWM.
utilities.powerbi.PowerBIDataset
Create PowerBI engine from dataset ID and credential or token.
utilities.pupmed.PubMedAPIWrapper
Wrapper around PubMed API.
utilities.python.PythonREPL
Simulates a standalone Python REPL.
utilities.scenexplain.SceneXplainAPIWrapper
Wrapper for SceneXplain API.
utilities.searx_search.SearxResults(data)
Dict like wrapper around search api results.
utilities.searx_search.SearxSearchWrapper
Wrapper for Searx API.
utilities.serpapi.SerpAPIWrapper
Wrapper around SerpAPI.
utilities.twilio.TwilioAPIWrapper
Messaging Client using Twilio.
utilities.wikipedia.WikipediaAPIWrapper
Wrapper around WikipediaAPI.
utilities.wolfram_alpha.WolframAlphaAPIWrapper
Wrapper for Wolfram Alpha.
utilities.zapier.ZapierNLAWrapper
Wrapper for Zapier NLA.
Functions¶
utilities.loading.try_load_from_hub(path, ...)
Load configuration from hub.
utilities.powerbi.fix_table_name(table)
Add single quotes around table names that contain spaces.
utilities.powerbi.json_to_md(json_contents)
Converts a JSON object to a markdown table.
utilities.vertexai.init_vertexai([project, ...])
Init vertexai.
utilities.vertexai.raise_vertex_import_error()
Raise ImportError related to Vertex SDK being not available.
langchain.utils: Utils¶
Generic utility functions.
Functions¶
utils.check_package_version(package[, ...])
Check the version of a package.
utils.comma_list(items)
Convert a list of items to a comma-separated string.
utils.get_from_dict_or_env(data, key, env_key)
Get a value from a dictionary or an environment variable (see the sketch after this list).
utils.get_from_env(key, env_key[, default])
Get a value from an environment variable, with an optional default.
utils.guard_import(module_name, *[, ...])
Dynamically imports a module and raises a helpful exception if the module is not installed.
utils.mock_now(dt_value)
Context manager for mocking out datetime.now() in unit tests.
utils.raise_for_status_with_text(response)
Raise an error with the response text.
utils.stringify_dict(data)
Stringify a dictionary.
utils.stringify_value(val)
Stringify a value.
utils.xor_args(*arg_groups)
Validate specified keyword args are mutually exclusive.
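To illustrate utils.get_from_dict_or_env from the list above, a minimal sketch (the key and variable names are hypothetical):
import os
from langchain.utils import get_from_dict_or_env

os.environ["MY_SERVICE_API_KEY"] = "secret-from-env"

# An explicit dict value takes precedence over the environment variable.
value = get_from_dict_or_env(
    {"my_service_api_key": "secret-from-dict"},
    "my_service_api_key",
    "MY_SERVICE_API_KEY",
)
print(value)  # -> "secret-from-dict"

# With no dict entry, the environment variable is used instead.
value = get_from_dict_or_env({}, "my_service_api_key", "MY_SERVICE_API_KEY")
print(value)  # -> "secret-from-env"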
langchain.vectorstores: Vectorstores¶
Wrappers on top of vector stores.
Classes¶
vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch(...)
Alibaba Cloud OpenSearch Vector Store
vectorstores.analyticdb.AnalyticDB(...[, ...])
VectorStore implementation using AnalyticDB.
vectorstores.annoy.Annoy(embedding_function, ...)
Wrapper around Annoy vector database.
vectorstores.atlas.AtlasDB(name[, ...])
Wrapper around Atlas: Nomic's neural database and rhizomatic instrument.
vectorstores.awadb.AwaDB([table_name, ...])
Interface implemented by AwaDB vector stores.
vectorstores.azuresearch.AzureSearch(...[, ...])
Initialize with necessary components.
vectorstores.azuresearch.AzureSearchVectorStoreRetriever
Create a new model by parsing and validating input data from keyword arguments.
vectorstores.base.VectorStore()
Interface for vector stores.
vectorstores.base.VectorStoreRetriever
Create a new model by parsing and validating input data from keyword arguments.
vectorstores.cassandra.Cassandra(embedding, ...)
Wrapper around Cassandra embeddings platform.
vectorstores.chroma.Chroma([...])
Wrapper around ChromaDB embeddings platform.
vectorstores.clarifai.Clarifai([user_id, ...])
Wrapper around Clarifai AI platform's vector store.
vectorstores.clickhouse.Clickhouse(embedding)
Wrapper around ClickHouse vector database
vectorstores.clickhouse.ClickhouseSettings
ClickHouse Client Configuration
vectorstores.deeplake.DeepLake([...])
Wrapper around Deep Lake, a data lake for deep learning applications.
vectorstores.docarray.base.DocArrayIndex(...)
Initialize a vector store from DocArray's DocIndex.
vectorstores.docarray.hnsw.DocArrayHnswSearch(...)
Wrapper around HnswLib storage.
vectorstores.docarray.in_memory.DocArrayInMemorySearch(...)
Wrapper around in-memory storage for exact search.
vectorstores.elastic_vector_search.ElasticKnnSearch(...)
A class for performing k-Nearest Neighbors (k-NN) search on an Elasticsearch index.
vectorstores.elastic_vector_search.ElasticVectorSearch(...)
Wrapper around Elasticsearch as a vector database.
vectorstores.faiss.FAISS(embedding_function, ...)
Wrapper around FAISS vector database.
vectorstores.hologres.Hologres(...[, ndims, ...])
VectorStore implementation using Hologres.
vectorstores.lancedb.LanceDB(connection, ...)
Wrapper around LanceDB vector database.
vectorstores.marqo.Marqo(client, index_name)
Wrapper around Marqo database.
vectorstores.matching_engine.MatchingEngine(...)
Vertex Matching Engine implementation of the vector store.
vectorstores.milvus.Milvus(embedding_function)
Initialize wrapper around the milvus vector database.
vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch(...)
Wrapper around MongoDB Atlas Vector Search.
vectorstores.myscale.MyScale(embedding[, config])
Wrapper around MyScale vector database
vectorstores.myscale.MyScaleSettings
MyScale Client Configuration
vectorstores.opensearch_vector_search.OpenSearchVectorSearch(...)
Wrapper around OpenSearch as a vector database.
vectorstores.pgembedding.BaseModel(**kwargs)
A simple constructor that allows initialization from kwargs.
vectorstores.pgembedding.CollectionStore(...)
A simple constructor that allows initialization from kwargs.
vectorstores.pgembedding.EmbeddingStore(**kwargs)
A simple constructor that allows initialization from kwargs.
vectorstores.pgembedding.PGEmbedding(...[, ...])
VectorStore implementation using Postgres and the pg_embedding extension. pg_embedding uses sequential scan by default, but you can create an HNSW index using the create_hnsw_index method.
- connection_string: a Postgres connection string.
- embedding_function: any embedding function implementing the langchain.embeddings.base.Embeddings interface.
- collection_name: the name of the collection to use (default: langchain). Note that this is the name of the collection, not of a table. The tables are created when initializing the store (if they do not exist), so make sure the user has the right permissions to create tables.
- distance_strategy: the distance strategy to use (default: EUCLIDEAN, the euclidean distance).
- pre_delete_collection: if True, deletes the collection if it already exists (default: False); useful for testing.
vectorstores.pgvector.BaseModel(**kwargs)
A simple constructor that allows initialization from kwargs.
vectorstores.pgvector.CollectionStore(**kwargs)
A simple constructor that allows initialization from kwargs.
vectorstores.pgvector.DistanceStrategy(value)
Enumerator of the Distance strategies.
vectorstores.pgvector.PGVector(...[, ...])
VectorStore implementation using Postgres and pgvector.
vectorstores.pinecone.Pinecone(index, ...[, ...])
Wrapper around Pinecone vector database.
vectorstores.qdrant.Qdrant(client, ...[, ...])
Wrapper around Qdrant vector database.
vectorstores.redis.Redis(redis_url, ...)
Wrapper around Redis vector database.
vectorstores.redis.RedisVectorStoreRetriever
Create a new model by parsing and validating input data from keyword arguments.
vectorstores.rocksetdb.Rockset(client, ...)
Wrapper around Rockset vector database.
vectorstores.singlestoredb.DistanceStrategy(value)
Enumerator of the Distance strategies for SingleStoreDB.
vectorstores.singlestoredb.SingleStoreDB(...)
This class serves as a Pythonic interface to the SingleStore DB database.
vectorstores.singlestoredb.SingleStoreDBRetriever
Retriever for SingleStoreDB vector stores.
vectorstores.sklearn.BaseSerializer(persist_path)
Abstract base class for saving and loading data.
vectorstores.sklearn.BsonSerializer(persist_path)
Serializes data in binary json using the bson python package.
vectorstores.sklearn.JsonSerializer(persist_path)
Serializes data in json using the json package from python standard library.
vectorstores.sklearn.ParquetSerializer(...)
Serializes data in Apache Parquet format using the pyarrow package.
vectorstores.sklearn.SKLearnVectorStore(...)
A simple in-memory vector store based on the scikit-learn library NearestNeighbors implementation.
vectorstores.sklearn.SKLearnVectorStoreException
Exception raised by SKLearnVectorStore.
vectorstores.starrocks.StarRocks(embedding)
Wrapper around StarRocks vector database
vectorstores.starrocks.StarRocksSettings
StarRocks Client Configuration
vectorstores.supabase.SupabaseVectorStore(...)
VectorStore for a Supabase postgres database.
vectorstores.tair.Tair(embedding_function, ...)
Wrapper around Tair Vector store.
vectorstores.tigris.Tigris(client, ...)
Initialize Tigris vector store
vectorstores.typesense.Typesense(...[, ...])
Wrapper around Typesense vector search.
vectorstores.vectara.Vectara([...])
Implementation of Vector Store using Vectara.
vectorstores.vectara.VectaraRetriever
Create a new model by parsing and validating input data from keyword arguments.
vectorstores.weaviate.Weaviate(client, ...)
Wrapper around Weaviate vector database.
vectorstores.zilliz.Zilliz(embedding_function)
Initialize wrapper around the Zilliz vector database.
Functions¶
vectorstores.alibabacloud_opensearch.create_metadata(fields)
Create metadata from fields.
vectorstores.annoy.dependable_annoy_import()
Import annoy if available, otherwise raise error.
vectorstores.clickhouse.has_mul_sub_str(s, *args)
Check if a string contains multiple substrings.
vectorstores.faiss.dependable_faiss_import([...])
Import faiss if available, otherwise raise error.
vectorstores.myscale.has_mul_sub_str(s, *args)
Check if a string contains multiple substrings.
vectorstores.starrocks.debug_output(s)
Print a debug message if DEBUG is True.
vectorstores.starrocks.get_named_result(...)
Get a named result from a query.
vectorstores.starrocks.has_mul_sub_str(s, *args)
Check if a string has multiple substrings.
vectorstores.utils.maximal_marginal_relevance(...)
Calculate maximal marginal relevance.
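As a sketch of vectorstores.utils.maximal_marginal_relevance, assuming the signature maximal_marginal_relevance(query_embedding, embedding_list, lambda_mult=0.5, k=4) returning the indices of the selected embeddings:
import numpy as np
from langchain.vectorstores.utils import maximal_marginal_relevance

query = np.array([1.0, 0.0])        # query embedding
candidates = [
    np.array([0.9, 0.1]),           # highly relevant
    np.array([0.89, 0.11]),         # near-duplicate of the first candidate
    np.array([0.1, 0.9]),           # less relevant but adds diversity
]

# lambda_mult trades off relevance (1.0) against diversity (0.0).
indices = maximal_marginal_relevance(query, candidates, lambda_mult=0.5, k=2)
# The second pick should favor the diverse candidate over the near-duplicate,
# e.g. indices == [0, 2].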
langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings¶
class langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, pipeline_ref: ~typing.Any = None, client: ~typing.Any = None, inference_fn: ~typing.Callable = <function _embed_documents>, hardware: ~typing.Any = None, model_load_fn: ~typing.Callable = <function load_embedding_model>, load_fn_kwargs: ~typing.Optional[dict] = None, model_reqs: ~typing.List[str] = ['./', 'InstructorEmbedding', 'torch'], inference_kwargs: ~typing.Any = None, model_id: str = 'hkunlp/instructor-large', embed_instruction: str = 'Represent the document for retrieval: ', query_instruction: str = 'Represent the question for retrieving supporting documents: ')[source]¶
Bases: SelfHostedHuggingFaceEmbeddings
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_name=model_name, hardware=gpu)
Initialize the remote inference function.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param embed_instruction: str = 'Represent the document for retrieval: '¶
Instruction to use for embedding documents.
param hardware: Any = None¶
Remote hardware to send the inference function to.
param inference_fn: Callable = <function _embed_documents>¶
Inference function to extract the embeddings.
param inference_kwargs: Any = None¶
Any kwargs to pass to the model’s inference function.
param load_fn_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model load function.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_id: str = 'hkunlp/instructor-large'¶
Model name to use.
param model_load_fn: Callable = <function load_embedding_model>¶
Function to load the model remotely on the server.
param model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']¶
Requirements to install on hardware to inference the model.
param pipeline_ref: Any = None¶
param query_instruction: str = 'Represent the question for retrieving supporting documents: '¶
Instruction to use for embedding query.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → LLM¶
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
langchain.embeddings.embaas.EmbaasEmbeddingsPayload¶
class langchain.embeddings.embaas.EmbaasEmbeddingsPayload[source]¶
Bases: TypedDict
Payload for the embaas embeddings API.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
model
texts
instruction
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
instruction: str¶
model: str¶
texts: List[str]¶
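Since EmbaasEmbeddingsPayload is a TypedDict, it is constructed like a plain dict; a minimal sketch (the model name and instruction below are hypothetical placeholders):
from langchain.embeddings.embaas import EmbaasEmbeddingsPayload

payload: EmbaasEmbeddingsPayload = {
    "model": "e5-large-v2",                      # hypothetical model name
    "texts": ["first document", "second document"],
    "instruction": "Represent the document for retrieval",
}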
langchain.embeddings.google_palm.embed_with_retry¶
langchain.embeddings.google_palm.embed_with_retry(embeddings: GooglePalmEmbeddings, *args: Any, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call.
langchain.embeddings.minimax.embed_with_retry¶
langchain.embeddings.minimax.embed_with_retry(embeddings: MiniMaxEmbeddings, *args: Any, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call.
langchain.embeddings.jina.JinaEmbeddings¶
class langchain.embeddings.jina.JinaEmbeddings(*, client: Any = None, model_name: str = 'ViT-B-32::openai', jina_auth_token: Optional[str] = None, jina_api_url: str = 'https://api.clip.jina.ai/api/v1/models/', request_headers: Optional[dict] = None)[source]¶
Bases: BaseModel, Embeddings
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param jina_api_url: str = 'https://api.clip.jina.ai/api/v1/models/'¶
param jina_auth_token: Optional[str] = None¶
param model_name: str = 'ViT-B-32::openai'¶
Model name to use.
param request_headers: Optional[dict] = None¶
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to Jina’s embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to Jina’s embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that auth token exists in environment.
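The page lists no usage example; a minimal sketch based on the constructor signature above (the token value is a placeholder):
from langchain.embeddings import JinaEmbeddings

embeddings = JinaEmbeddings(jina_auth_token="<your-token>")  # or supply the token via the environment
doc_vectors = embeddings.embed_documents(["first document", "second document"])
query_vector = embeddings.embed_query("first document")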
langchain.embeddings.elasticsearch.ElasticsearchEmbeddings¶
class langchain.embeddings.elasticsearch.ElasticsearchEmbeddings(client: MlClient, model_id: str, *, input_field: str = 'text_field')[source]¶
Bases: Embeddings
Wrapper around Elasticsearch embedding models.
This class provides an interface to generate embeddings using a model deployed
in an Elasticsearch cluster. It requires an Elasticsearch connection object
and the model_id of the model deployed in the cluster.
In Elasticsearch you need to have an embedding model loaded and deployed.
- https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html
- https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html
Initialize the ElasticsearchEmbeddings instance.
Parameters
client (MlClient) – An Elasticsearch ML client object.
model_id (str) – The model_id of the model deployed in the Elasticsearch
cluster.
input_field (str) – The name of the key for the input text field in the
document. Defaults to ‘text_field’.
Methods
__init__(client, model_id, *[, input_field])
Initialize the ElasticsearchEmbeddings instance.
aembed_documents(texts)
Embed search docs.
aembed_query(text)
Embed query text.
embed_documents(texts)
Generate embeddings for a list of documents.
embed_query(text)
Generate an embedding for a single query text.
from_credentials(model_id, *[, es_cloud_id, ...])
Instantiate embeddings from Elasticsearch credentials.
from_es_connection(model_id, es_connection)
Instantiate embeddings from an existing Elasticsearch connection.
async aembed_documents(texts: List[str]) → List[List[float]]¶
Embed search docs.
async aembed_query(text: str) → List[float]¶
Embed query text.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Generate embeddings for a list of documents.
Parameters
texts (List[str]) – A list of document text strings to generate embeddings
for.
Returns
A list of embeddings, one for each document in the input list.
Return type
List[List[float]]
embed_query(text: str) → List[float][source]¶
Generate an embedding for a single query text.
Parameters
text (str) – The query text to generate an embedding for.
Returns
The embedding for the input query text.
Return type
List[float]
classmethod from_credentials(model_id: str, *, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, input_field: str = 'text_field') → ElasticsearchEmbeddings[source]¶
Instantiate embeddings from Elasticsearch credentials.
Parameters
model_id (str) – The model_id of the model deployed in the Elasticsearch
cluster.
input_field (str) – The name of the key for the input text field in the
document. Defaults to ‘text_field’.
es_cloud_id – (str, optional): The Elasticsearch cloud ID to connect to.
es_user – (str, optional): Elasticsearch username.
es_password – (str, optional): Elasticsearch password.
Example
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Credentials can be passed in two ways. Either set the env vars
# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically
# pulled in, or pass them in directly as kwargs.
embeddings = ElasticsearchEmbeddings.from_credentials(
model_id,
input_field=input_field,
# es_cloud_id="foo",
# es_user="bar",
# es_password="baz",
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
classmethod from_es_connection(model_id: str, es_connection: Elasticsearch, input_field: str = 'text_field') → ElasticsearchEmbeddings[source]¶
Instantiate embeddings from an existing Elasticsearch connection.
This method provides a way to create an instance of the ElasticsearchEmbeddings
class using an existing Elasticsearch connection. The connection object is used
to create an MlClient, which is then used to initialize the
ElasticsearchEmbeddings instance.
Parameters
model_id (str) – The model_id of the model deployed in the Elasticsearch cluster.
es_connection (elasticsearch.Elasticsearch) – An existing Elasticsearch connection object.
input_field (str, optional) – The name of the key for the input text field in the document. Defaults to ‘text_field’.
Returns
An instance of the ElasticsearchEmbeddings class.
Example
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Create Elasticsearch connection
es_connection = Elasticsearch(
hosts=["localhost:9200"], http_auth=("user", "password")
)
# Instantiate ElasticsearchEmbeddings using the existing connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
model_id,
es_connection,
input_field=input_field,
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
langchain.embeddings.openai.embed_with_retry¶
langchain.embeddings.openai.embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) → Any[source]¶
Use tenacity to retry the embedding call.
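The embed_with_retry helpers on this and the preceding pages are thin tenacity wrappers around the provider call. A minimal sketch of the general pattern (the wait/stop settings and the flaky function here are assumptions for illustration, not the library's exact configuration):
import random
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(6), wait=wait_exponential(multiplier=1, min=4, max=10))
def flaky_embedding_call(texts):
    # Stand-in for the provider API call; fails transiently half the time.
    if random.random() < 0.5:
        raise ConnectionError("transient network failure")
    return [[0.0, 0.0, 0.0] for _ in texts]

vectors = flaky_embedding_call(["hello"])  # retried until success or 6 attempts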
langchain.embeddings.tensorflow_hub.TensorflowHubEmbeddings¶
class langchain.embeddings.tensorflow_hub.TensorflowHubEmbeddings(*, embed: Any = None, model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3')[source]¶
Bases: BaseModel, Embeddings
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
Initialize the tensorflow_hub and tensorflow_text.
param model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'¶
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
langchain.embeddings.llamacpp.LlamaCppEmbeddings¶
class langchain.embeddings.llamacpp.LlamaCppEmbeddings(*, client: Any = None, model_path: str, n_ctx: int = 512, n_parts: int = - 1, seed: int = - 1, f16_kv: bool = False, logits_all: bool = False, vocab_only: bool = False, use_mlock: bool = False, n_threads: Optional[int] = None, n_batch: Optional[int] = 8, n_gpu_layers: Optional[int] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param f16_kv: bool = False¶
Use half-precision for key/value cache.
param logits_all: bool = False¶
Return logits for all tokens, not just the last token.
param model_path: str [Required]¶
param n_batch: Optional[int] = 8¶
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
param n_ctx: int = 512¶
Token context window.
param n_gpu_layers: Optional[int] = None¶
Number of layers to be loaded into gpu memory. Default None.
param n_parts: int = -1¶
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
param n_threads: Optional[int] = None¶
Number of threads to use. If None, the number
of threads is automatically determined.
param seed: int = -1¶
Seed. If -1, a random seed is used.
param use_mlock: bool = False¶
Force system to keep model in RAM.
param vocab_only: bool = False¶
Only load the vocabulary, no weights.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed a list of documents using the Llama model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Embed a query using the Llama model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that llama-cpp-python library is installed.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
langchain.embeddings.fake.FakeEmbeddings¶
class langchain.embeddings.fake.FakeEmbeddings(*, size: int)[source]¶
Bases: Embeddings, BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param size: int [Required]¶
async aembed_documents(texts: List[str]) → List[List[float]]¶
Embed search docs.
async aembed_query(text: str) → List[float]¶
Embed query text.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed search docs.
embed_query(text: str) → List[float][source]¶
Embed query text.
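A minimal usage sketch: FakeEmbeddings returns random vectors of the requested size, which makes it handy for tests that need an Embeddings implementation without network calls (the vectors are random, so repeated calls will not match):
from langchain.embeddings import FakeEmbeddings

fake = FakeEmbeddings(size=256)                 # dimensionality of the fake vectors
doc_vectors = fake.embed_documents(["a", "b"])  # two 256-dim random vectors
query_vector = fake.embed_query("a")            # one 256-dim random vector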
langchain.embeddings.huggingface.HuggingFaceInstructEmbeddings¶
class langchain.embeddings.huggingface.HuggingFaceInstructEmbeddings(*, client: Any = None, model_name: str = 'hkunlp/instructor-large', cache_folder: Optional[str] = None, model_kwargs: Dict[str, Any] = None, encode_kwargs: Dict[str, Any] = None, embed_instruction: str = 'Represent the document for retrieval: ', query_instruction: str = 'Represent the question for retrieving supporting documents: ')[source]¶
Bases: BaseModel, Embeddings
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers
and InstructorEmbedding python packages installed.
Example
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
Initialize the sentence_transformer.
param cache_folder: Optional[str] = None¶
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
param embed_instruction: str = 'Represent the document for retrieval: '¶
Instruction to use for embedding documents.
param encode_kwargs: Dict[str, Any] [Optional]¶
Key word arguments to pass when calling the encode method of the model.
param model_kwargs: Dict[str, Any] [Optional]¶
Key word arguments to pass to the model.
param model_name: str = 'hkunlp/instructor-large'¶
Model name to use.
param query_instruction: str = 'Represent the question for retrieving supporting documents: '¶
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
langchain.embeddings.vertexai.VertexAIEmbeddings¶
class langchain.embeddings.vertexai.VertexAIEmbeddings(*, client: '_LanguageModel' = None, model_name: str = 'textembedding-gecko', temperature: float = 0.0, max_output_tokens: int = 128, top_p: float = 0.95, top_k: int = 40, stop: Optional[List[str]] = None, project: Optional[str] = None, location: str = 'us-central1', credentials: Any = None, request_parallelism: int = 5, max_retries: int = 6)[source]¶
Bases: _VertexAICommon, Embeddings
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param credentials: Any = None¶
The default custom credentials (google.auth.credentials.Credentials) to use
param location: str = 'us-central1'¶
The default location to use when making API calls.
param max_output_tokens: int = 128¶
Token limit determines the maximum amount of text output from one prompt.
param max_retries: int = 6¶
The maximum number of retries to make when generating.
param model_name: str = 'textembedding-gecko'¶
Model name to use.
param project: Optional[str] = None¶
The default GCP project to use when making Vertex API calls.
param request_parallelism: int = 5¶
The amount of parallelism allowed for requests issued to VertexAI models.
param stop: Optional[List[str]] = None¶
Optional list of stop words to use when generating.
param temperature: float = 0.0¶
Sampling temperature, it controls the degree of randomness in token selection.
param top_k: int = 40¶
How the model selects tokens for output: the next token is selected from among the top-k most probable tokens.
param top_p: float = 0.95¶
Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value.
embed_documents(texts: List[str], batch_size: int = 5) → List[List[float]][source]¶
Embed a list of strings. Vertex AI currently
sets a max batch size of 5 strings.
Parameters
texts – List[str] The list of strings to embed.
batch_size – [int] The batch size of embeddings to send to the model
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Embed a text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
validator validate_environment » all fields[source]¶
Validates that the python package exists in environment.
property is_codey_model: bool¶
task_executor: ClassVar[Optional[Executor]] = None¶
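The page lists no usage example; a minimal sketch, assuming GCP application-default credentials are configured and the Vertex AI API is enabled:
from langchain.embeddings import VertexAIEmbeddings

embeddings = VertexAIEmbeddings()   # model_name defaults to "textembedding-gecko"
doc_vectors = embeddings.embed_documents(
    ["first document", "second document"],
    batch_size=5,                   # Vertex AI's current maximum batch size
)
query_vector = embeddings.embed_query("first document")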
langchain.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding¶
class langchain.embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding(*, client: Any = None, model: Optional[str] = 'luminous-base', hosting: Optional[str] = 'https://api.aleph-alpha.com', normalize: Optional[bool] = True, compress_to_size: Optional[int] = 128, contextual_control_threshold: Optional[int] = None, control_log_additive: Optional[bool] = True, aleph_alpha_api_key: Optional[str] = None)[source]¶
Bases: AlephAlphaAsymmetricSemanticEmbedding
The symmetric version of Aleph Alpha's semantic embeddings.
The main difference is that here, both the documents and
queries are embedded with a SemanticRepresentation.Symmetric
Example
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding
embeddings = AlephAlphaSymmetricSemanticEmbedding()
text = "This is a test text"
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aleph_alpha_api_key: Optional[str] = None¶
API key for Aleph Alpha API.
param client: Any = None¶
param compress_to_size: Optional[int] = 128¶
Should the returned embeddings come back as an original 5120-dim vector,
or should it be compressed to 128-dim.
param contextual_control_threshold: Optional[int] = None¶
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
param control_log_additive: Optional[bool] = True¶
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
param hosting: Optional[str] = 'https://api.aleph-alpha.com'¶
Optional parameter that specifies which datacenters may process the request.
param model: Optional[str] = 'luminous-base'¶
Model name to use.
param normalize: Optional[bool] = True¶
Should returned embeddings be normalized
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to Aleph Alpha’s Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to Aleph Alpha's symmetric embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields¶
Validate that API key and python package exist in environment.
langchain.embeddings.spacy_embeddings.SpacyEmbeddings¶
class langchain.embeddings.spacy_embeddings.SpacyEmbeddings(*, nlp: Any = None)[source]¶
Bases: BaseModel, Embeddings
SpacyEmbeddings is a class for generating embeddings using the Spacy library.
It only supports the ‘en_core_web_sm’ model.
nlp¶
The Spacy model loaded into memory.
Type
Any
embed_documents(texts: List[str]) → List[List[float]]
Generates embeddings for a list of documents.
embed_query(text: str) → List[float]
Generates an embedding for a single piece of text.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param nlp: Any = None¶
async aembed_documents(texts: List[str]) → List[List[float]][source]¶
Asynchronously generates embeddings for a list of documents.
This method is not implemented and raises a NotImplementedError.
Parameters
texts (List[str]) – The documents to generate embeddings for.
Raises
NotImplementedError – This method is not implemented.
async aembed_query(text: str) → List[float][source]¶
Asynchronously generates an embedding for a single piece of text.
This method is not implemented and raises a NotImplementedError.
Parameters
text (str) – The text to generate an embedding for.
Raises
NotImplementedError – This method is not implemented.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Generates embeddings for a list of documents.
Parameters
texts (List[str]) – The documents to generate embeddings for.
Returns
A list of embeddings, one for each document.
embed_query(text: str) → List[float][source]¶
Generates an embedding for a single piece of text.
Parameters
text (str) – The text to generate an embedding for.
Returns
The embedding for the text.
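A minimal usage sketch, assuming spaCy and its small English model are installed (pip install spacy, then python -m spacy download en_core_web_sm):
from langchain.embeddings.spacy_embeddings import SpacyEmbeddings

embeddings = SpacyEmbeddings()      # loads en_core_web_sm into memory
doc_vectors = embeddings.embed_documents(["This is a test.", "Another document."])
query_vector = embeddings.embed_query("This is a test.")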